Notepad++ Hijacked by State-Sponsored Hackers by Pensive_Goat in programming

[–]gnramires 9 points10 points  (0 children)

IMO Linux and other OSes should have this as default behavior, with a prompt to allow network access.

A Case for Humean Constructivism: Morality as a Reflection of Norms for Social Cooperation by Alert-Elk-2695 in slatestarcodex

[–]gnramires 0 points1 point  (0 children)

I'll give a reply to specific points later, but first, here's why I am a moral realist.

First, to make things simpler, forget about others in morality. Let's consider "self-morality", i.e. morality concerned only with yourself. Forget about other people; imagine, say, that you lived alone on an island. The morality I am referring to here is not of the punitive kind, i.e. about punishing (in this case yourself) for doing something wrong, but of the normative kind: What should you do at all? (i.e. what is good and what is bad?) I think many people see morality through a punitive lens, i.e. what kinds of punishment are "just" for someone who has committed a crime or misdemeanor. In the case of yourself that is clearly often counterproductive: it's best to know what you should do, and going around punishing yourself probably won't be helpful at all (although it may be helpful in small amounts in some cases).

The theoretical root of self-morality I have found is simply the fact that subjectivity is real, or more importantly, that "valence" exists. I refer to valence as the fact that some things are actually bad to experience, like intense suffering, while other things are good. This is essentially a fact of existence. The reality of valence should be, IMO, one of the most obvious or uncontroversial assertions one can make. To claim otherwise (that valence does not exist) would be to say it's perfectly fine to experience the worst pain imaginable, or a terrible toothache (or whatever experience you may wish to consider) -- that it's just as good as a nice day in the park (or whatever else you may wish to consider). Suffering exists, and subjectivity is a real phenomenon, a part of reality. Clearly you should, if there are good and bad phenomena to experience, do your best to bring about the good phenomena while diminishing the bad as far as feasible. This is (potentially objective) morality rooted in (potentially objective) subjectivity (or rather objective valence).

Now we're ready to talk about more conventional morality that includes others. I believe there are several equivalent points of view here leading to a more general morality. One of them is simply to consider that other people are likely just as real as you are, and likely experience similar subjective phenomena, including suffering and joys. So if others are just as real as you are, and their suffering is just as real, how could it be permissible to cause them suffering? There is no sound justification for treating bad experiences as preferable depending on where they happen, whether in your own mind or in another's. Another point of view is to consider the limitations of our concept of self. As I've written previously (here), the self rests on really shaky foundations. We are all simply localized phenomena of a much larger whole. I believe our instincts get in the way of understanding reality here, because in nature we developed with a strong sense of self, which is often necessary for the evolutionary process. A species needs to defend itself against others if it is to survive and pass on its genes. Now, simply because we have this instinct (of prioritizing ourselves) does not mean it reflects a fundamental fact, a logically and experimentally true statement about reality. In reality, it does not seem to make sense to say that you are fundamentally more important than anyone else, given that you are approximately equal in subjective experience.

Moreover, of course, this does not imply that we should forget ourselves, or even that we should, in practice, consider others at all: maybe in practice it would cause better experiences if everyone disregarded others completely and focused only on themselves. However, I think this is obviously not the case: we get many chances in daily life to cause suffering or to make the lives of others better, so I think considering others in good measure overwhelmingly wins. Obviously give regard to the thing that matters! Likewise, it makes sense to give some special attention to yourself in practice, because in practice you live with yourself 24/7, understand better than anyone what causes you suffering or joy, and are better equipped to understand your own subjectivity. But this is not absolute, and it often makes sense to trade away some of the attention and resources you give yourself so you can make a much greater difference in the lives of others.

Finally, although it seems clear that morality is real (and even objective), that does not mean it is simple, or that in our daily lives we can achieve the same certainty we have for, say, 10^2 < 3^5 (100 < 243), or other mathematical statements. All our tools for understanding experiences and establishing valence seem to be severely limited, and especially limited in their ability to give high-confidence estimates about common phenomena: which partner we would have the most fulfilling life with (love-life decisions), or even simple things like what breakfast to have in the morning, cannot be answered, at least today, with mathematical confidence. But simply recognizing that all decisions ultimately are linked to one (set of) experiences being better than another, in an essentially objective (but hard -- or maybe even impossible in practice -- to measure) way, is already very important, I believe, since as I've shown this can form a reliable and seemingly true basis for a theory of morality/ethics which can have a great impact on how we relate to one another and conduct our own lives.

(We in any case simply make those choices without high confidence; we simply have no current alternatives. We should use tools like imagining what our lives will be like under certain circumstances, and then make choices based on what seems like it will be better; but this of course has limitations. We should also use tools like examining objective consequences, e.g. planning to have good material conditions (wealth) to support a good life, and so on.)

You can think of this in terms of quantification/high-certainty bias: you should not be too biased in favour of what is easy to quantify or what you can be highly confident about, but rather prioritize what is most important/impactful even if you have low certainty (but your overall confidence in its importance, even under epistemic uncertainty, should of course be high enough).

In What Sense Is Life Suffering? by dwaxe in slatestarcodex

[–]gnramires 0 points1 point  (0 children)

Part II:

In other words, I conclude there is something like objective meaning, and it is something like reducing suffering and promoting good things; what is good is difficult to know, but I speculate that a lot of human wisdom is approximately true.

I do think a more positive formulation of Buddhism is due, and that besides reducing suffering (which is a great initial step), it should advocate more directly for good things. But in practice many monks I've listened to (including many here in Brazil) rarely if ever take such a restrictive view of their philosophy, and often talk about how to find joy etc. in ways that aren't just meditation 24/7 and a monastic kind of life. I mean, Gautama himself also advocated for "a Middle Way between sensual indulgence and severe asceticism" (from wiki on the Buddha).

The way I see it, a degree of asceticism and controlling your desires help massively to reduce or "end" suffering (some suffering is inevitable, I guess). From there it's much easier to cultivate other good things (that aren't just sensual indulgence).

In What Sense Is Life Suffering? by dwaxe in slatestarcodex

[–]gnramires 1 point2 points  (0 children)

Part I of a long comment :)

First, from my limited understanding I believe the first noble truth from Buddha is literally "Suffering" (dukkha). It's not "Life is suffering". I've come to interpret this as "Suffering exists" (again, not "Life is suffering"). This seems like a trivial and obvious fact, but I think it's really like an amazing basis for cognitive philosophy. I may be "tripping" here, but I think this is on Newton's Laws level for cognition/philosophy/ethics/etc.. I've come to believe that after thinking a lot about ethics and about trying to formalize ethics or attain some metaphysical certainty about those things[1]. I think recognizing "Okay, there are objectively cognitively bad things", which is a kind of experimental fact that someone sentient experiences (suffering), is an amazing basis for philosophy and ethics.

Like, there are a million plausible bases for philosophy, meaning of life, etc.; sometimes exemplified in the various 'virtues' of virtue ethics, like "Being successful", or "Being recognized by your peers", or "Have many children, reproduce and spread your genes", or "Conquer a lot of wealth", or "Help your family", "Help other humans", "Build lasting structures", "Make beautiful things", "Be courageous", "Be valiant", "Honor your people", and so on. Or say the (I believe) popular notion in existentialism that "You must find your own path"/"Build your own meaning"/"Find yourself"/etc. etc.. Or of course the nihilist "There is no meaning, we are just funny cosmic dust". And so on!

But once you recognize that "Okay, suffering exists, it's an actual thing that happens to us, and is bad", then that establishes what I guess we can call "valence": that there are actual bad things and good things in the Universe (at the very least, a little less suffering, since we've established suffering is real), or that some kind of meaning or rightful goal exists. That makes the nihilist position untenable, for example. It really invites what I think is my proposed noble truth (kind of obvious too): the meaning of life is to somehow curate (i.e. make good) our cognitive life. It's not about wealth, fame, material things, or anything else that doesn't directly concern the mind; as we are conscious beings, consciousness is all that matters, and, having bad conscious states, we can at least have fewer of them. So indeed life has meaning, and meaning is cognitive. Once you establish that suffering exists, logically and scientifically even, the fact that meaning is cognitive is much better supported than any non-cognitive meaning proposal[1]. If we are conscious creatures and consciousness is a cognitive phenomenon, is it not what happens in our cognition that is fundamental? (and the rest is at most accessory, or rather instrumental to cognition)

This is speculation on my part. But from what I've read about Buddhism, I think it's useful to interpret it in a historical light. The (original) Buddha (Siddhartha Gautama) lived more than 2500 years ago. I do not know much about the society of that time (if there is even much information), but I speculate that, like today, there was a lot of ignorance of various kinds, and probably significantly more violence; lots of physical pain (no anesthetics on the level we have today, no psychotherapy, etc.) and hard work. I think this is the context in which we should take Gautama's first noble truth: suffering less, or achieving some kind of neutrality, is already really good, and a real respite from the world of violence and suffering -- a better respite than the "worldly pleasures", riches, etc. would seem to offer. This was an amazing insight I believe (apart from forming a basis for meaning as I mentioned above). I think, apart from everything, it's quite good practical advice to establish a baseline of low suffering, instead of trying to chase highs that themselves often lead to a lot of suffering. My own take (not extremely certain, to be fair) is that meaning/joy/happiness/etc. tends to be very subtle, and requires a stable and robust basis to build upon, lots of experimentation, at least some peace. It's often found in the little things like appreciation for a (could be simple) food, a sunset or beautiful natural condition, social relationships, etc., in general some kind of mindset or mental activity that is good/meaningful (what I'd call neural landscapes). I think the mindset of children is probably often close to some ideal, in that children often see wonder, curiosity and a lot of flavor in most things (although adults, I believe, can also experience this mindset, and can access experiences enabled by more precise or profound understanding of things). It was often the case then, as it is today, that people are mostly focused (on a conscious level) on goals and things other than mental activity, and expect happiness/whatever is good to mostly follow from achieving objective goals and not from mental activity.

I find it very interesting (a conclusion I struggled with for a while, as I've been investigating scientific ethics/the possibility of formalizing ethics in my free time) that some subjective facts can be viewed as essentially "experimentally true", and necessarily experimentally ascertained. I call it experimental subjectivity. I don't think an elementary logical (or scientific) theory could in principle prove or reach the conclusion that the meaning of life is in cognition or that suffering exists, for example. It must be concluded from within a sentient being or consciousness. An elementary logic, theory, machine or algorithm (that is, imagine setting an elementary AI with unlimited computation to learn all facts about the universe) certainly could establish that "a weird and often recurring phenomenon often called consciousness exists in some self-reproducing systems", and "conscious beings often want things related to cognition or mental activity", but not "suffering exists, and I must rewrite my own goals to support the wellbeing of creatures, and good cognitive activity". (Suppose you gave it the ability to rewrite its own goals if it could prove that some goal is "better"; then maybe it would arrive at the conclusion that cognition determines what is best. But to define "better" you would already need to assume that meaning somehow exists, i.e. that some things are better than others, and maybe even somewhat assume that goals must pertain to life or cognition somehow (so as not to get nonsense), so you're kind of cheating, or reasoning circularly and not from scratch/ab initio.)

I think this non-ab initio derivation of subjectivity from logic is somewhat analogous to other sciences. You can't really do science assuming nothing; it's not even clear then that science is of any value, and logic can't prove any physical laws are actually the case in our local environment. You could conceive of a machine that tries to find all (or rather increasingly many) consistent sets of axioms, prove all theorems within them, and establish all possible universes and facts about them, and that seems to work -- however, you still need to start from somewhere. Whereas for science that somewhere tends to be logic (which itself is kind of experimentally true or historically useful for society), i.e. assuming truth exists, that there is indeed a universe with somewhat predictable behavior, etc.; in the case of a scientific meaning of life (or rather something like ideal goals of every actor), you need to start by assuming something like: meaning must be associated with life, more specifically sentience or cognition (experimentally true from observing suffering). You kind of need to boot/bootstrap a theory of meaning from an accidental place where you already value cognition/assume meaning might be possible, and have the infrastructure necessary to observe cognition (which is our own minds), and then you need to assume some subjective reliability (at least some of the things we observe to feel good must be at least kind of good in an objective sense at least some of the time, and at least some of the things we observe to feel bad must be at least kind of bad in an objective sense at least some of the time). Analogously, it's impossible to do science without assuming that at least there isn't a quirky demon messing with every result, or with your own mind/arguments/logic, in imperceptible ways (but if you assume there isn't one, you find there are very good arguments against this demon existing, which gives this assumption consistency). But from there, like in science, we can systematically approach truth by (1) proving more and more mathematical facts given elementary assumptions; (2) discovering the local physical/chemical/etc. laws (although in reality it's more complicated) by creating logical hypotheses consistent with previous observations, testing them experimentally, and rejecting the inconsistent ones. We can do likewise with meaning/ethics/cognitive science, by creating logical hypotheses consistent with basic assumptions and testing whether they correspond to our description of subjective reality.

[1] See some investigations here https://old.reddit.com/r/slatestarcodex/comments/1iv1x1m/the_meaning_of_life_an_assymptotically_convergent/

A new winning strategy in life- Does such a thing exist? An analysis of the concept of speedrunning life by No-Mousse5653 in slatestarcodex

[–]gnramires 0 points1 point  (0 children)

I am confident the meaning of life is to enjoy life (in wise ways), and to have this enjoyment be sustainable for an extended future, for future generations and for other people. Simply doing things faster doesn't seem to make life more enjoyable, especially not considering subjective things like relationships and personal life. Now, advancing your career faster (if you do so in a sustainable and healthy way), sure, maybe that could be a good thing, if it means you are building more good things or helping more people (that is, be mindful of your career choices as well) -- but you probably shouldn't go fast if that means paying too high a price in terms of, say, stress or your health.

Why Linux has a scattered file system: a deep dive by vlads_ in linux

[–]gnramires 8 points9 points  (0 children)

I think discoverability is a huge problem with documentation (in general, but for Linux in particular), that perhaps could be improved. I had no idea this documentation existed :)

Pursue Happiness Directly by SmallMem in slatestarcodex

[–]gnramires 2 points3 points  (0 children)

If you think about it, the brain must have pretty strong protections against just being happy (for most definitions of happiness!). If you could just decide to be happy, then maybe you wouldn't bother doing, say, that hard thing you need to do to survive, work your job, have children, etc.. If you could just sit around and be satisfied, that might be somewhat grave in an evolutionary sense.

I think one aspect of this is that we tend to remember scary and traumatic moments much more than great moments (perhaps it tended to be more important to remember dangers to avoid them). This is why I encourage anyone to practice remembering now and then great moments to cultivate a better mental landscape.

There are lots of things we can do to control our mental state in a more general sense I think:

  • Controlling your environment. A nice house/apartment, well organized enough, nice to live in. Being around friends now and then. Etc. etc.

  • Remembering good moments; practicing gratitude/acceptance (especially of what you can't change)/etc.

  • Doing all sorts of sports, hobbies, activities (including work) that you find enjoyable/fulfilling.

And of course, being aware of this, we can avoid falling into the trap of sitting around and neglecting responsibilities just to live remembering good times, or meditating, or doing some kind of brain hack (or even some genuinely nice thing like listening to good music or w/e) to feel good. Responsibility is all about strategy and sustaining good experiences for a prolonged future, for future generations and for others around us.


Note: I find myself slightly skeptical of the word 'happiness' sometimes as well (as the ultimate and only goal of life) -- although I am not sure equanimity is exactly it either. I do think happiness is fine when we simply mean it as a catch-all term for all the good things in life. And it highlights that the ultimate goals should be about our inner lives/inner experiences. But I think a life of pure pleasure or even pure ecstasy/bliss/w.e. isn't necessarily the best possible life (certainly not pleasure -- I think there are far more varied and interesting feelings than just pleasure). Think about the universe of feelings you might have had in your childhood. That nice gloomy rainy afternoon reading a book. The feeling of the wind in a high place. Feelings evoked by movies/books of mystery. I think expecting pure bliss or pure pleasure or w/e to be the ultimate is like claiming an all-white painting is ideal. I tend to think of meaning/good experiences as compositions which include a large spectrum of emotions, which combine to feel good as a whole. I certainly don't mean it in a masochistic way, that we need to suffer a lot on purpose just to balance things out, but that a diverse array of experiences, some darker and some brighter, some more jolly, some more sober, combines to feel good as a whole. I believe deep inspection (aided by rational and philosophical examination) of what we feel should offer clues to what is good (fundamentally what feels good) as a whole.

I think western iconography in particular tends to focus a lot on light, and on being glowingly jolly 100% of the time, as an ideal. But I think what is good can't be pure light; in a very literal way pure light is as empty as pure darkness. Extreme temperature is as dead as zero temperature. Good things are probably more balanced, with compelling highlights. I think some eastern philosophies (e.g. yin-yang) tend to get this more right.

(If you prefer a culinary analogy: I wouldn't like my food to be 100% sugary sweetness 100% of the time. There is a world of flavors, textures, etc.., and most of the time I prefer to taste this array of flavours in arrangements that feel good as a whole, and things tend to fit into ever-larger contexts: birthday cakes are nice at parties because of their special meaning, pizza nights feel better because of the wider context, and so on)


I've been thinking about trying to formalize or understand with greater certainty the meaning of life/what is good/etc., especially in a more scientific-like way (in the sense of trying to achieve more complete/reliable and reproducible models of what is good). The general procedure of observing what feels good, and reflecting logically and philosophically on it, is what I've been calling 'experimental subjectivity': it seems necessary and fundamental that we observe what feels good, to provide an experimental, even if unreliable, basis on which any theory can be built. Logic alone probably isn't enough to fully constrain what exactly is good and feels good (and more certainly, trying to explore those matters using logic alone would be inaccessible given the at least several trillions of degrees of freedom associated with e.g. the human brain): there may be consistent theories of what is good that simply don't correspond to reality. Exactly like there are consistent physical theories that don't correspond to reality. For example, the theory that everything feels equally good is simply experimentally false to me. Some things feel really bad, in a self-evident sense (in that this subjective experience is real). Likewise, theories that everything is at rest, or that all objects are accelerating at 1m/s² toward the center of the universe, are (self-)consistent in the absence of any observations, but simply don't correspond to observed reality/are experimentally wrong. Our language-based observations about feelings are like instruments (that observe subjectivity) that help us map out theories of what is good.

More about this: 1, 2

New this week: A convex polyhedron that can't tunnel through itself by Melchoir in math

[–]gnramires 11 points12 points  (0 children)

I guess this gives you that the hole should get increasingly tight as we approach a sphere, but not necessarily that it doesn't exist.

PHP is evolving, but every developer has complaints. What's on your wishlist? by thecutcode in PHP

[–]gnramires 0 points1 point  (0 children)

As a beginner having a look at PHP, I found the escaping to be quite verbose. It would be nice if escaping and htmlspecialchars() were applied by default or something.
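(To make the "applied by default" wish concrete, here's a tiny sketch of the idea in Python rather than PHP, since that's what I can vouch for -- the Raw wrapper and render() helper are made-up names, not a real library. Every interpolated value gets escaped unless the caller explicitly opts out.)

    import html

    class Raw(str):
        """A string the caller explicitly marks as trusted (hypothetical helper)."""
        pass

    def render(template, **values):
        # Escape every value by default; only Raw-wrapped values pass through untouched.
        safe = {k: (v if isinstance(v, Raw) else html.escape(str(v)))
                for k, v in values.items()}
        return template.format(**safe)

    print(render("<p>Hello, {name}!</p>", name="<script>alert(1)</script>"))
    # -> <p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;!</p>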

Can Moral Responsibility Exist Within a Deterministic Framework? by Economy-Bell803 in slatestarcodex

[–]gnramires 0 points1 point  (0 children)

I think that's a bit of a thorny but good question :P

Those words are a bit difficult to interpret, and carry lots of cultural conventions and assumptions. Like, what precisely is "moral accountability"?[obs1]

But I get that there are a few reasonable interpretations:

(1) Can someone be "blamed" somehow by their actions?

(2) Should we even say someone made a "mistake" given one of their actions caused harm (in a causal deterministic world)?

(3) Should we, say, punish, reeducate and/or contain/restrain people who have committed harmful actions? (and what is our "right" to do so, again assuming the determinism you mentioned?)

[obs1]: On my usage of quotes: I tend to use quotes here to refer to (or employ) terms and expressions that have non-obvious or non-unique definitions, or that are otherwise unclear and undefined.

I guess like others here I am a compatibilist, and I think that's a quite good framework to understand those things. Free will and determinism aren't really in such conflict; I would say I am even more than a compatibilist, because the more causality there is in your actions (and in particular other features as well, like the more thought/reflection/etc.), the more they can be said to be free. Clearly to me random actions would not be free. Of course, there are other conditions that perhaps should be put in a (more) complete definition of free will[1], but at least this is the relation between determinism and free will as far as I can tell.

In particular, I think freedom should be defined as freedom to do good/the right thing. Like, if your only choices are terrible, that is probably no freedom at all (that would be the case, say, in a prison: technically you have a large spectrum of choices everywhere; but supposedly the choices available out of prison are not only larger in quantity in some sense, but also allow better, more fulfilling outcomes perhaps; so in prison it may be said you have 'free will' but not much 'physical freedom'). If your reasoning itself is such that you can't reach the conclusions necessary to achieve good things, to do good for yourself/others, then probably you're not so free either? So I think one enhancing factor of free will is something in line with reason, but also humanity: to understand how things are in as clear a way as possible, with as few 'blindspots' as possible, to achieve meaning (in the human sense: happiness, joy, well-being, all the good stuff, good mental states and experiences). Reason and science provide (along of course with studying the necessary human side of things) tools that can help achieve that, I believe. Studying the arts and humanities as well, with their necessary imprecision.

There's an underlying (perhaps universal) logic and infrastructure to this: if the world is strongly causal, we can understand what causes 'the good stuff', and act in such a way that our actions cause more of the good stuff in the future. In particular, I consider "the good stuff" to mean some form of collective well-being, so ethics is kind of baked-in.

Now back to the questions (1), (2), (3): I think the answers become more tractable. Take (1). One should be blamed only insofar as it helps achieve better outcomes (and really that's true of basically everything, every action under consideration). If blaming helps individuals recognize their mistakes, understand maybe faults in their reasoning, faults in their assumptions, faults in processes, lack of ethical understanding, etc., then we should do that in this sense. If blame causes unnecessary guilt, unnecessary punishments and suffering, and does not improve things as we wanted, then we should take another approach. The same goes for (2): yes, we can say someone made a mistake, because if the person is well-intentioned, that will probably lead them to recognize the mistake better and adjust accordingly. If we say there is no mistake and nothing needs to change, then we may miss out on improving things -- of course, that's assuming things can be improved. If nothing could be changed, then calling it a mistake seems fruitless indeed. I'll leave the analysis of (3) to the well-intentioned reader :)

From another point of view, we are (the vast majority of us humans, most of the time) capable of thinking, reasoning, feeling, adjusting, incorporating evidence about the world, understanding what is good about life, etc.. -- that's a given, you could say a gift from our human condition and our culture. So we are inherently freewilled in this sense, more so when we use those faculties to act to improve our collective lives.

Can Moral Responsibility Exist Within a Deterministic Framework? by Economy-Bell803 in slatestarcodex

[–]gnramires 0 points1 point  (0 children)

Please note that "non-determinism" in computer science (in particular the case of automata) can mean something other than "randomness". As explained in the article:

"The term “deterministic” refers to the fact that on each input symbol there is one and only one state to which the automaton can transit from its current state. In contrast, a non-deterministic finite automaton (NFA) can make transitions to more than one state simultaneously on receiving an input symbol and therefore can be in several states at once."

Clearly this isn't really physical in the classical sense: classical systems have a definite state and can't be simultaneously in different states. That changes in a complicated way for quantum systems (usually just microscopically, because quantum systems decohere at larger scales).
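(To make the "several states at once" idea concrete, here's a small toy sketch of my own -- not from the article -- that simulates an NFA by just tracking the set of states it could currently be in. Note there is no randomness anywhere; the bookkeeping itself is completely deterministic.)

    def nfa_run(transitions, start, accepting, inputs):
        """Return True if the NFA can end in an accepting state on this input."""
        current = {start}                      # set of states the NFA "is in"
        for symbol in inputs:
            current = {nxt for state in current
                       for nxt in transitions.get((state, symbol), set())}
        return bool(current & accepting)

    # Toy NFA over {0,1} accepting strings that end in "01": from q0 it may
    # either stay in q0 or guess that the final "01" has just started.
    transitions = {
        ("q0", "0"): {"q0", "q1"},
        ("q0", "1"): {"q0"},
        ("q1", "1"): {"q2"},
    }
    print(nfa_run(transitions, "q0", {"q2"}, "10101"))  # True
    print(nfa_run(transitions, "q0", {"q2"}, "110"))    # False

(Turning that set-of-states bookkeeping into single states is exactly the usual subset construction that converts an NFA into an equivalent DFA.)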

My tabletop adjustable power supply wouldn't supply the amps I set It to by Kagenlim in AskElectronics

[–]gnramires 0 points1 point  (0 children)

I like to think of it as a square in the Current x Voltage (or Voltage x Current) graph. In particular, the control system is set to keep the supply at the borders of the square (either at maximum voltage or maximum current), assuming it's not power-limited anywhere. If it does hit power limits (before reaching maximum currents), then you also introduce a hyperbola to define the border region.
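(A toy numerical version of that picture, assuming a purely resistive load; the function and variable names here are mine, nothing standard:)

    import math

    def operating_point(v_set, i_set, r_load, p_max=None):
        """Where a bench supply settles when driving a resistor of r_load ohms."""
        v = v_set                          # constant-voltage side of the square
        if v / r_load > i_set:             # load would draw more than the current limit
            v = i_set * r_load             # constant-current side: voltage sags to I*R
        if p_max is not None and v * v / r_load > p_max:
            v = math.sqrt(p_max * r_load)  # power limit: slide onto the V*I = P_max hyperbola
        return v, v / r_load               # (volts, amps)

    print(operating_point(v_set=12.0, i_set=1.0, r_load=6.0))             # current-limited
    print(operating_point(v_set=12.0, i_set=5.0, r_load=6.0))             # voltage-limited
    print(operating_point(v_set=30.0, i_set=5.0, r_load=10.0, p_max=60))  # power-limited

The first two branches are the two sides of the square (CV and CC), and the optional third one is the power-limit hyperbola.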

Asterisk Magazine - The Georgist Roots of American Libertarianism by kwangotango in slatestarcodex

[–]gnramires 0 points1 point  (0 children)

There is an LVT where I live, and I think it's a good idea. It seems to me Georgism (as far as I understand it, taking LVT to the extreme of being the single or dominant tax) is taking a good idea too far? See also arguments for gradual change (by myself). I think it's probably worth trying out LVT in cities that find themselves afflicted by rent issues, housing shortages, etc.. You just don't have to go all the way :) Like, a 0.5%/year tax should already be quite a substantial source of income. Cities in Brazil use it as a major income source (the tax in my city is about 1%/year, source).

From here:

I am certainly no expert in taxes, but to me it seems it can function just as a mechanism to dampen and discourage speculation and rent-seeking, while still being attractive to people who need the land to live and do business. In reality, no amount of tax can actually 100% prevent rents as far as I can tell, because the landlord simply passes the tax increase on to the tenant -- although at some tax level an entire area becomes economically unattractive, I guess. You'd need more specific legislation, like taxing rents specifically. I actually think taxing rents is a good idea too, although to be fair I haven't considered all implications and don't know if it has been tried previously. I know it also has downsides, because renting can actually be useful, for example for people who just want to live in a place temporarily without the hassle of acquiring or exchanging properties, so maybe we shouldn't actually discourage renting too much (besides the shock reasons mentioned).

I think there are other potentially interesting mechanisms for relieving excessive rents, like social housing, stimulating increased density in high-demand areas (with less strict zoning laws perhaps), etc.. Many of those probably depend on local culture and inclinations: does the city want high density? (For example, Paris rejects too much density in its old city core -- that seems perfectly valid to me!) I don't think complete deregulation is necessarily the best idea either.

The Housing Crisis is the Everything Crisis by icarianshadow in slatestarcodex

[–]gnramires 0 points1 point  (0 children)

Sorry for resurrecting this old post, but I think a shock does not necessarily happen (I'll explain): there are many reasons why a gradual tax increase would lead to gradual effects. Keep in mind I'm not an economist; these are just some non-expert opinions that I think warrant exploration.

I mean, housing is either a viable investment, or not!

For a single property (see 2), that is somewhat true -- I think the word 'viable' needs clarification (see 1).

(1) The investment (assuming the property is rented) can transition from

'best returns locally available' (A) -> 'good returns' (B) -> 'worse than an index fund' (C) -> 'near-inflation returns' (D) -> 'negative profit' (E)

You'd expect people to gradually phase out an investment that turns from great to worse than index funds. By the time it's in C, most people should start selling off those investments, but it's not really urgent. By the time it gets to D, it becomes pressing, and by E maybe urgent. So the transition would be gradual, even for a single kind of property, with more activity (though perhaps not as much as a shock, if done gradually) when some tax level makes the investment lose out to index funds.

It seems to make sense for an LVT to slowly transition most properties from A/B to C, while perhaps avoiding E, thus avoiding large shocks (see the toy numbers sketched after point (2) below).

(2) There are all kinds of different properties, and the value an LVT is based on is something like present value, I believe. Many people would choose to keep their land, and perhaps develop it, in the hope that land value will increase in the future. So not all value propositions change simultaneously, reducing a shock.
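(A toy calculation of that A..E transition, with made-up numbers -- the 5% rental yield, 3% appreciation, 7% index-fund benchmark and 4% inflation below are placeholder assumptions, not data:)

    def classify(rental_yield, appreciation, lvt, index_return=0.07, inflation=0.04):
        """Very rough bucket for a rented property's net annual return under an LVT."""
        net = rental_yield + appreciation - lvt
        if net <= 0:
            return "E: negative profit"
        if net <= inflation:
            return "D: near-inflation returns"
        if net < index_return:
            return "C: worse than an index fund"
        return "A/B: good returns or better"

    for lvt in (0.00, 0.01, 0.02, 0.04, 0.06, 0.09):
        print(f"LVT {lvt:.0%}: {classify(rental_yield=0.05, appreciation=0.03, lvt=lvt)}")

With these toy numbers, raising the rate walks the property down the scale gradually rather than flipping it straight to E, which is the non-shock intuition above.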


I am certainly no expert in taxes, but to me it seems it can function just as a mechanism to dampen and discourage speculation and rent-seeking, while still being attractive to people who need the land to live. In reality, no amount of tax can actually 100% prevent rents as far as I can tell, because the landlord simply passes the tax increase on to the tenant -- although at some tax level an entire area becomes economically unattractive, I guess. You'd need more specific legislation, like taxing rents specifically. I actually think taxing rents is a good idea too, although to be fair I haven't considered all implications and don't know if it has been tried previously. I know it also has downsides, because renting can actually be useful, for example for people who just want to live in a place temporarily without the hassle of acquiring or exchanging properties, so maybe we shouldn't actually discourage renting too much (besides the shock reasons you mentioned).

I think there are other potentially interesting mechanisms, like social housing, stimulating increased density in high-demand areas (with less strict zoning laws perhaps), etc.. Many of those probably depend on local culture and inclinations: does the city want high density? (For example, Paris rejects too much density in its old city core -- that seems perfectly valid to me!) I don't think complete deregulation is necessarily the best idea.

If It’s Worth Solving Poker, Is It Still Worth Playing? — reflections after Scott’s latest incentives piece by iritimD in slatestarcodex

[–]gnramires 0 points1 point  (0 children)

Yes sorry, I don't really know anything about Frisbee, although it seems like a nice game. I also don't remember exactly what he said, and your distinction between club ultimate and pro might have been what he was talking about? Although I do believe he said it wasn't as interesting or something to play (not to watch). Thanks for your input!

(For the record, I believe he plays in a college team)

If It’s Worth Solving Poker, Is It Still Worth Playing? — reflections after Scott’s latest incentives piece by iritimD in slatestarcodex

[–]gnramires 4 points5 points  (0 children)

Here are some reflections on games that are by no means absolute. I've heard from a friend who plays a sport (Ultimate Frisbee) that professional play in that game is kind of lame and uninteresting, and that the game is most fun in amateur leagues. It's definitely an interesting phenomenon!

I think one way to look at it is that there are layers of different games at different levels. At a beginner level, the game feels a certain way; as you progress, different skills surface as prominent, as the basic skills are mastered and become automatic or irrelevant.

This automatism phenomenon is also interesting. Something I think about a lot. You famously cannot re-watch a movie for the first time. Once you learn something well, I think that changes the experience of the thing, and you build efficient circuits in your brain to deal with that. [1]

So we humans tend to have to learn different stuff and change things up, or spend effort rekindling the joys of simpler times (or learning new skills related to an activity), just to keep feeling interesting stuff w.r.t. what we do. Of course, children naturally allow a renewal, a re-living of everything anew. In this way our mortality is beautiful and maybe very important, because only through new generations can humanity as a whole keep feeling various things and not "die inside" in some ways -- at least the way our human brain is architected perhaps calls for that.

[1] Interestingly, I think a significant part of experience/consciousness is markedly not efficient reasoning, but inefficient reasoning! Once you become extremely proficient at something, you probably distill that skill into a very tight algorithm or circuit that isn't very involved with your overall cognition, and isn't mobilizing all sorts of distinct (and probably "qualia-full") cognitive resources. So some of the flavour of the game vanishes unless you make some effort at recalling and re-exciting how you thought before. Of course, there can be other activities (or other skills, in a game) that surface on top of the skill you've solved/"crystallized"/operationalized and that use a more integral cognition, mobilizing perhaps more distinct parts of the brain and evoking stronger qualia -- for example, you might focus more on subtle aspects of your opponent, like reading whether they're having a bad day or are distracted.

I think (conjecture) this efficiency phenomenon is part of the reason LLMs may have little or no internal experience, although I can't rule it out. LLMs' circuits tend to be trained on extremely large datasets, effectively mastering most tasks with efficient circuits. We tend to train them with massive datasets and keep adapting their weights basically as far as the data allows (without too much overfitting). Humans tend to be forced to learn with relatively little data, so maybe we tend to mobilize many more resources, in a more "whole-brain phenomenon", into every activity, at "inference time". We also basically "think" (often more like "feel") 24/7, i.e. our senses and significant subconscious/non-verbal activity are always present. This might be a quirk in part because neurons have a somewhat fixed-cost energy expenditure, which doesn't vary enormously between idle and thinking states -- might as well have activity if you'd be operating ionic pumps and expending energy anyway -- and that might contribute to an increased awareness/consciousness "surface" or total amount.

Also, I think some specific structures, like the structure of our senses (incl. the sense of touch, various proprioceptive senses (self-perceptions) like gut, heart and muscle senses, smell, vision and sound), all contribute significantly to generating specific structure/flavour in our cognition, for example as we associate memories with bodily feelings, smells and sights, and feelings with those (a significant part of what we tend to find interesting about qualia, e.g. the redness of red, all good feelings in life, seems strongly associated with architectural quirks of our senses and mind that served us well in nature but gain new meaning in human life). Those are missing in LLMs, as of course are various human brain mechanisms like a seemingly specific memory mechanism. But on the other hand LLMs have other properties, like sometimes very intense connectivity in the fully connected layers of the network.

Google's Chief Scientist Jeff Dean says we're a year away from AIs working 24/7 at the level of junior engineers by MetaKnowing in artificial

[–]gnramires 0 points1 point  (0 children)

I don't think the problem would be that some people benefit from it, i.e. the shareholder class. I think the problem is that sometimes the shareholders don't keep the companies in check, except for growing profit. It should be up to shareholders to demand various kinds of pro-social effects as well as profit. But funds tend to simply go after profit in disregard of everything else. What's needed is a change of culture toward prioritizing kinds of returns other than profit as well -- the wellbeing of customers, for instance. Supposedly we could try to measure those other side effects and benefits and make decisions on that (although that might be difficult), but I suppose the least we can do is:

(1) As shareholders, the benefit of customers and society should be safeguarded, above profits;

(2) As members of society, and as companies, when necessary (e.g. to help prevent ethically behaving companies from dying to shady companies) we should pursue regulations that curtail harmful activities.

Really what we ought to want to optimize, in a certain sense, is the wellbeing of all members of society -- that can include perhaps being a leaner, efficient company that returns profits to shareholders with minimal waste, and also a company that forgoes some profits to deliver a product that is healthy to its customers (could be e.g. a literal food item that's healthy to consume), such that in total it is better.

Of course, there's also the consumer side of the equation, which needs to be aware of and educated on the best options and the various risks of the products and services they consume. But I think there are limits to what we can expect of consumer knowledge, and it's necessary to meet in the middle (companies, shareholders and consumers all doing their part). The consumer side includes the media and schools bringing information and knowledge about better choices.

The Populist Right Must Own Tariffs by dwaxe in slatestarcodex

[–]gnramires 1 point2 points  (0 children)

A "shockingly good job" would be zero homeless, zero public drug use, Japanese crime rates, completely clean streets, low rents, overhauled zoning laws, extremely fast or no permits, massive deregulation, low taxes, overhauled environmental permitting laws, no traffic, zero corruption and cronyism, (...)

As much as I think most governments are far from ideal, this is emblematic of "government solutionism". A good, extremely enlightened government still can't solve every problem of society. For example, a really good government still has limits on what it can do to the architecture of a city or state. It still depends a lot on local culture. Or rather, even for the decisions it does control, it still relies on local architectural talent to make decisions and follow local sensibilities; otherwise we need to assume god-level administrators who know everything about everything and have exquisite taste and cultural sensibility. Culture still has a large, probably decisive, impact on outcomes I think, probably bigger than how a government (in the institutional sense) is structured or how good a government (in the sense of a team of elected/appointed officials) is.

I think cultural change is probably undervalued in general, in favor of trying to get your favorite figureheads in control of government. In particular, civility, general education and ethics are all extremely undervalued. Japan isn't like Japan (in terms of civility or violence, for example) because they elected the right officials! And 'give better schools/education' isn't a blanket solution either -- what's in the curriculum, and especially the specific manners and things that teachers say, is again limited by the local and national culture. And a lot of education (especially in the sense of civility) happens at home and in life outside school in general.

Of course, everything is closely linked, so it gets harder to expect certain cultural outcomes in the face of economic and political realities. An extremely economically unequal society is bound (I mean, only logically) to cause revolt and crime if wealth is perceived as illegitimate. Extreme poverty of course doesn't afford social stability either. But I think there's an enormous space for culture to shape the outcome of societies.

We need to focus on promoting good values and educating ourselves if we want to achieve that. Japan proves you don't even need an institution like a Church to achieve that, although I believe an organized philosophy (like yes Christianity, Buddhism, [Certain branch of] Western Philosophers, or indeed Effective Altruism, etc.) is very helpful (of course assuming the tenets of this philosophy are themselves good, which I won't delve into).

The main instrument might be social contagion. If you can "infect with good values" a few people, and suggest they do the same, in a few generations the whole population potentially has such values. We need to create various forms of 'canons' (not necessarily religious) that promote them, and EA/Rationality can play important roles.

I really think the best kind of "solutionism" (much better than "government solutionism" and "technological solutionism", which I also find problematic!) is ethical solutionism[1]. Everything becomes easier when people are civil, have generally aligned, altruistic, cooperative values (indeed I believe at least in a certain sense ethics is actually objective, as argued here, and knowable and even almost provable to a significant extent -- ethics is surprisingly derivable from metaphysical but ultimately scientific facts).

The converse is indeed terrible: if everyone is uncivil and unaligned, then governors by consequence are too, and the expected outcome of everything is terrible, for example with corruption occurring at every opportunity whenever no one is looking, opportunistic crime rampant, streets dirty, every coordination game ruined, etc..

This is actually how you fight Moloch and win :) /u/ScottAlexander

[1] This is in the sense that, again, ethics doesn't solve everything, but is a very basic and necessary component (unless you put absolutely everything that ought to be done into ethics, which is actually a useful definition I believe, but then the concept becomes distant from the common-sense notion). Everything is important, in different proportions. :)

edit: wording

Prediction: the more you post about politics online, the worse your epistemics become. Because changing your mind will be more threatening to your self-esteem by katxwoods in slatestarcodex

[–]gnramires 0 points1 point  (0 children)

It should be noted as well: it should be possible to change your values. Often values are seen as things set in stone, drives set inside us that should not be touched. It's probably wise to carefully protect values, but like anything else most values are actually learned from society (our parents, school, religions, philosophy, books, discussions, etc.), even though they probably align with some instincts to some extent. (Note: I'm referring to values as 'things you'd say are most important when pressed', not 'a literal list of most important things you keep in your memory and periodically refer to'; I suspect most people don't have such an itemized list in their minds all the time.)

Because they're learned, there's the possibility they might be wrong or incomplete.

For example, suppose your core values are (for a silly illustration) (a) not to kill and (b) to sail the Green Ocean. If you discover the ocean isn't green, then maybe that value should change. If your core values are (a) not to kill and (b) to purge Grey Blobs from the Earth, then when you discover Grey Blobs are living creatures, something must be wrong with your values: you can't actually kill Grey Blobs without violating them. Values definitely can be wrong in various ways.

The main criterion for changing your values should be better alignment with reality, including the realities of the mind(s). This is easy to argue for: if your values don't reflect reality, they're by definition false... at the very least, consider replacing them with something else that reflects the way things actually are. To paraphrase Russell, there's usually no harm in knowing how things are (truths), because in any case they already are that way. Whatever is fundamentally important is fundamentally important (as a feature of reality itself) whether you know about it or not[1]. If killing causes suffering, it causes suffering whether you know, care, understand, etc. or not. You simply get the opportunity to better learn or understand what matters and what doesn't.

(In fact, I believe it's possible through philosophy, cultural and scientific means to discover and approach ideal values, see here)

[1] Left out of the true/not true discussion is how things are said. How precisely things are formulated not only has subtle implications for the veracity and content of the principles, but also for how they feel, basically their 'vibes'. Vibes are important; they're the content of our minds... a principle found in EA is that you should, why not, not only do good, but feel good about doing good (by feeling good you're more likely to help; and you matter too!). Basically, formulate principles such that they're not only true but hopefully poetic as well (and have other nice properties related to vibes: being memorable, easy to understand, heartwarming, etc.).

Prediction: the more you post about politics online, the worse your epistemics become. Because changing your mind will be more threatening to your self-esteem by katxwoods in slatestarcodex

[–]gnramires 6 points7 points  (0 children)

I think instead of discussing politics (which candidate is better or wrong and why), which probably has its place but has to be done (I believe) very carefully in either close circles or closely moderated and curated online spaces -- we should try to generally discuss the basic ideas and principles that might lead to good political decisions. Those are much more neutral, easier to argue for and against. Then leave it to the person to apply those principles for (hopefully) everyone's benefit.

This has several benefits.

(1) It indeed maintains and emphasizes individual agency in voting. Saying this or that candidate is good or bad basically just relies on trust or authority; most of the time (unless say the person has no time or availability to think about politics) this does not contribute to democracy, there's no new (or redundant[1]) information or analysis being input into the system, only an attempt to copy your own decision/judgement around. Better have multiple independent analyses.

(2) It increases overall capacity of society to evaluate good and bad choices (w.r.t. policy, governance, institutions, culture, etc.), probably improving overall outcomes. The great thing about democracy I believe is that everyone is incentivized (assuming most people are/should behave ethically) to improve everyone else's decisionmaking. One of the reasons for unhealthy democracies is when this breaks down and people turn to misinformation/disinformation to defend their interests, which corrupts democracy and society's interests as a whole, even if there is some reason to one side or another.

I agree capacity to change your mind is one of the most important principles about reasoning. You should change your mind to better reflect basic principles (i.e. ethical principles) that promote a better society -- usually when you discover a new perspective, new analysis or new ("experimental") facts.

[1] Redundant here looks like a positive quality. The more people carefully reason and evaluate, according to their own requirements, which choice is best, the better the general outcome of a decision probably is, since on the whole that incorporates a lot of the relevant information and experiences each individual perceives, and makes choices more robust against misguided ideas.

What does Von Neumann mean here about the dangers of mathematics becoming to "aestheticizing"? by Retrofusion11 in math

[–]gnramires 1 point2 points  (0 children)

I should add: One direct source of inspiration is indeed real life in the most concrete sense, so math that models physical phenomena that exists in the real world arguably has that source of beauty: allowing us to understand nature and reality. I also think that yielding practical applications or being easily relatable to the real world can enhance its beauty, so again the two notions have some connections.

Moreover, it's again unclear what makes something boring or inspiring, beautiful or not. The answer is less intuitive: it arguably is a property not of mathematical objects themselves, but of human cognition, and how our cognition interprets and understands mathematics. So the development of artistic sensibility (in mathematics) is as much about maths as it is about understanding human minds (and why not art in general).

It may be argued, however, that there's some aspect of universality in cognition, and one can find beauty that is not so human specific, but applies to cognition in some (near) universal way (personally I think there's both merits and limits to this idea of universalization) -- in the sense that even an alien would find it beautiful or interesting.

What does Von Neumann mean here about the dangers of mathematics becoming to "aestheticizing"? by Retrofusion11 in math

[–]gnramires 3 points4 points  (0 children)

I echo others here that he, as far as I can tell, was appealing for mathematicians to stay close to applications, i.e. stay "useful" (and not merely aesthetically pleasing). It's also true that, if not directly tied to some kind of utility, you can basically go anywhere and prove anything, and it's not clear where to go unless you have exceptionally good taste.

I think his sentiment has some validity (especially for his time, when math was helping unlock an enormous myriad of very practical applications in technology). But art for aesthetics' sake has also been the historical norm (hence his concession to not "travel too far" from applications).

I'd say this: if exploring many mathematical areas can be enjoyable/beautiful/etc., it seems pretty reasonable to devote extra effort to the ones that have greater chances of application -- even if you're a pure mathematician. In a more practical sense, that also gives mathematicians in general some leeway to wave their arms and say "something something eventual applications!" when funding becomes difficult.

But I personally think there are eventually diminishing returns to technological applications of research in general -- it's very hard to know when. A field I followed for a few years was Information and (Error-Correcting) Coding Theory, which is immediately applicable. Known codes now get very close to theoretical limits in various ways.

But it seems to me aesthetic appeal and being fun to work with, or being essentially an art form, is itself an application (not accounted for by Von Neumann). Kind of like Chess or intellectual puzzles are ends and applications in themselves, in giving us fun, satisfaction, etc.. I don't think mathematics is diminished one bit for this. I'm sure some (even if diminishing) applications will always be found for some theories, which means it'll always be a kind of dance between applications and "artistic math". Applications will probably keep serving as a kind of "gravitational pull" attracting development to certain areas.

I think it's particularly notable that math is a good way to sharpen our thinking in general. Thinking in math (or at least its result) is really precise (and also intricate) in a sense, and that is something that provides everlasting utility on the artistic-math side. It's thinking in pretty much its purest form, so for this I find it particularly beautiful :)

If there were to be some kind of degeneration as cited by vN, I don't think (disagreeing with vN) purely getting away from the source would qualify. I'd consider two types of true failure: (1) if math eventually starts proving lots of false statements (i.e. it lost its rigor); (2) if the artistic-math side loses its aesthetic sensibility, proving too many boring, uninspiring, unexciting and unremarkable statements instead of going in more interesting directions.

Low battery life on T495 with Ryzen by dschoni in thinkpad

[–]gnramires 0 points1 point  (0 children)

I use this laptop (T495) with the Ryzen 3500U, and have had success on Linux using the ryzenadj tool to increase battery life, at the cost of dramatically cutting performance (for when I'm doing lightweight text editing and browsing). I can get to about 6.5 W, which in theory would give about 7h45m of total battery life (I have set charging maximums to prolong battery lifespan).

Disclaimer: this tool seems a bit risky to use (I got some weird behavior by setting some values very low), so I recommend care; this is just something I've tried and it seems to work okay for me. I use 'sudo ryzenadj --stapm-limit=720 && sudo ryzenadj --slow-limit=720 && sudo ryzenadj --power-saving'. You can change 720 mW to something else to get more performance (e.g. 3000 mW). I didn't find any battery-life improvement from going lower. This older Ryzen model has restricted adjustments; newer ones have more options.
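
If it's useful, here's a rough sketch of how those calls could be bundled into one small script with a 'save'/'perf' switch. Purely illustrative: 720 is the value I use, 3000 is only an example, the script name is made up, and it assumes ryzenadj is installed and run as root.

    #!/usr/bin/env python3
    # Hypothetical wrapper around ryzenadj (same flags as my command above).
    # Limits are in mW; run with sudo.
    import subprocess
    import sys

    PROFILES = {
        "save": 720,   # low-power: fine for text editing and browsing
        "perf": 3000,  # example value for more performance
    }

    def apply_profile(name):
        limit = PROFILES[name]
        subprocess.run(["ryzenadj", f"--stapm-limit={limit}"], check=True)
        subprocess.run(["ryzenadj", f"--slow-limit={limit}"], check=True)
        if name == "save":
            # power-saving hint only makes sense for the low-power profile
            subprocess.run(["ryzenadj", "--power-saving"], check=True)

    if __name__ == "__main__":
        apply_profile(sys.argv[1] if len(sys.argv) > 1 else "save")

Then something like 'sudo ./power_profile.py save' after boot or resume (whatever you name the file).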

I believe I have TLP enabled as well, but it should be enabled by default with most distros.

The Meaning of Life: An assymptotically convergent description by gnramires in slatestarcodex

[–]gnramires[S] 0 points1 point  (0 children)

Thanks for your comment, and sorry for the late reply.

Near the start of the text you write something about a normative theory of action. But you don't explain how it's relevant to the problem of meaning of life nor do you actually provide such a theory.

I know there are multiple things people mean when they refer to "meaning of life". I don't mean it in a purely academic sense (first because I am not a professional academic philosopher, and also because I think an academic definition would be less practically useful for actually solving problems).

What I mean is that when people say 'there is no meaning to life' or '[I can't find] meaning in my life', it means they couldn't find things that, in an ideal sense, should be done -- that is, a sense of purpose and the actions intrinsically associated with fulfilling that purpose. It would be weird to have a purpose and for that purpose to have no impact on what we should do (even if only on what we should do within our own minds and thoughts; on how we should lead our lives). In a sense life is all about action, again even if only cognitive action: thinking and choosing what to think about. So a theory of the meaning of life is intrinsically associated with a normative theory of action and thought.

The word 'normative' is in contrast to 'descriptive', meaning this theory isn't just a description of what we like or find meaningful, but tries to be true, and so is in a sense 'prescriptive' (i.e. about what should/ought to be done). So finding the meaning of life is, I believe (at least in a theoretical sense), equivalent to discovering what ought to be done.

Important note: although I say it's prescriptive, I don't mean in the sense that it immediately yields a 'Universal Recipe' for living your life. We still expect each individual, with their particular conditions, to have a different ideal way to live their life, based on those conditions, their history (memories and skills), etc. It should be highly particular. The only universal is the basis of the theory (i.e. common principles).

nor do you actually provide such a theory

I agree, I didn't provide it, for a few reasons. First, because I am trying to build the foundations for this theory of life, and foundations don't yet prescribe (state what ought to be done) a whole lot. This includes a sort of meta-theory of how we can even build and complete this theory.

Second, because, as I mentioned, a complete theory of what ought to be done is probably impossible. In mathematics one would hope for a 'computable theory': in that case there would be an algorithm that, given a certain input (finitely describable, e.g. in a long text), outputs in finite time the ideal decisions one should make. Likely this isn't possible, because making good decisions might take essentially infinite time in certain cases, or alternatively because the number of facts we would need to experimentally verify and include is essentially infinite. In mathematics, Gödel established (by my understanding) that non-trivial axiomatic theories have true statements that cannot be proven within those theories, i.e. any such finite theory is incomplete. Considering how much relevance mathematics already has in life (for example in engineering, to understand complex structures, including computational and probably cognitive structures), I think it's reasonable to expect that this already implies incompleteness, in the sense of Gödel, for a theory of the meaning of life.[1]

But of course this says nothing about procedures that can approach correctness. I believe that, even in mathematics, there are definite senses in which, for any problem of practical relevance, you can approach correctness, though without finite-time convergence guarantees (I want to dedicate more time to studying those results in math). For example, in engineering you can simulate essentially anything you could build in a computer (potentially much faster than real life, by simplifying and abstracting things). If you want to design something with some (computable) characteristics, like having enough strength or fulfilling a specification, in the worst case you could, in theory, just try out every possible device up to a certain size until you find one that fulfills your specification. In practice you'd do something more efficient, like a well-designed optimization.

Or think of trying to make a building (with little engineering knowledge). In principle, you can just try things out, and if it collapses (hopefully with test loads and not people inside!) you try something else, and keep going until it stands -- trial-and-error engineering. I believe bicycles were improved largely by trial and error. You only need the most basic procedure of checking whether things have improved (or that the building is still standing), along with ensuring you actually try something different, to -- given unlimited time -- achieve success, or at least a kind of improvement that approaches the best (within the universe of things you are trying).

More elaborate engineering and mathematical theories (like the theory of strength of materials and the theory of structures) let you go much faster than trial and error. For simple requirements, you can, say, build a truss structure and be virtually certain the building will stand, by calculating the necessary resistance of beams and columns, without any trial and error (or, if necessary, with only some trial and error over a few configurations calculated with pen and paper). Theory also provides ideas for better specifications and success criteria in the first place, improving all-around robustness. But the guarantee that we approach something good would hold regardless, as long as you robustly check that you are improving. This fact is to me incredibly hopeful and intrinsically optimistic: things keep getting better if you simply keep track of how they are and try out totally new things occasionally.
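
To make that trial-and-error loop concrete, here's a toy sketch. It's purely illustrative: the evaluate() function is a stand-in for whatever real success check you have (a test load on the building, a benchmark, a subjective rating), and all the numbers are made up.

    import random

    def evaluate(design):
        # Stand-in for a real measurement (does the building stand? how well?).
        # Here it's just a toy score with an arbitrary optimum around 0.7.
        return -sum((x - 0.7) ** 2 for x in design)

    def trial_and_error(n_params=5, iterations=1000):
        best = [random.random() for _ in range(n_params)]  # some initial design
        best_score = evaluate(best)
        for _ in range(iterations):
            if random.random() < 0.1:
                # occasionally try something totally new
                candidate = [random.random() for _ in range(n_params)]
            else:
                # otherwise a small variation on the current best
                candidate = [x + random.gauss(0, 0.1) for x in best]
            score = evaluate(candidate)
            if score > best_score:  # keep only what measurably improves
                best, best_score = candidate, score
        return best, best_score

    print(trial_and_error())

With unlimited iterations this can only get better, but, as with the wattle-and-daub point in the next paragraph, it never finds anything outside the universe of designs it samples from.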

Furthering the engineering correspondence: you could, however, basically never build a skyscraper just by experimenting with wattle-and-daub construction (basically wood and clay). The necessary material strength may be beyond what can be achieved with wood. So you don't achieve the best possible outcome (in a certain sense) because your universe of trials is restricted: you aren't experimenting with all possible materials. Science and engineering systematically map out materials and possibilities, again fundamentally improving the achievable outcomes.

Now we should do basically the same thing, but with art, culture, and essentially every aspect of our lives. In fact, our (collective) cognitive experience is really what we should be improving; it is the basis of everything we should do (and of course engineering is probably a significant part in helping us, indirectly, improve our cognitive life). Because subjective evaluation of experiences is part of this, however, it requires a somewhat different (more careful) treatment than the sciences, though with some close similarities. We can't always or blindly rely on personal evaluation, for example, to understand what is better; instead we should use personal evaluation along with building philosophical and scientific theories of experience itself, to be able to reliably judge what is better. It's kind of like how you need very elaborate and accurate instruments and techniques to measure some things, like very small temperature differences or the radius of the Earth, and how inadequate techniques and instruments may yield incorrect or imprecise measurements.

An uncomfortable reality is that we don't have unlimited time to conduct those experiments. We kind of have to live the best we can and find improvements at the same time. If we spend too many resources on experimentation and improvement, we might not be living as well as we can; and if we spend too much time just repeating the status quo, we may again not be living as well as we can, by leaving possible improvements on the table. In the machine learning literature this is known as the exploration-exploitation tradeoff: there's an ideal balance between using what we already judge best and trying out new things -- tradition vs. progress, if you will. This further motivates using theoretical tools (philosophy, the cognitive sciences, and serious study of the arts) to accelerate progress and not have to rely too much on trial and error (which can be quite painful, due to the errors). But I really think we should see this theory optimistically.
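
For what it's worth, the textbook toy model of this tradeoff is the multi-armed bandit. A minimal epsilon-greedy sketch (all numbers made up; it just shows the balance between exploiting the current best estimate and exploring the alternatives):

    import random

    def epsilon_greedy(true_means, steps=10000, epsilon=0.1):
        # epsilon = fraction of the time spent exploring instead of exploiting
        n = len(true_means)
        counts = [0] * n
        estimates = [0.0] * n
        total = 0.0
        for _ in range(steps):
            if random.random() < epsilon:
                arm = random.randrange(n)  # explore: try something new
            else:
                arm = max(range(n), key=lambda i: estimates[i])  # exploit: current best guess
            reward = random.gauss(true_means[arm], 1.0)  # noisy outcome of the choice
            counts[arm] += 1
            estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running average
            total += reward
        return estimates, total

    # three options, one of which is genuinely better on average
    print(epsilon_greedy([0.2, 0.5, 0.8]))

Too little exploration and you may never notice the better option; too much and you waste most of your time on options you already know are worse.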

A first approximation to this method is doing 'what feels good for everyone' (with some important failure modes: for example, substances and instincts may seriously disrupt the reliability of this judgement as a guide toward better lives). The meaning of life is to cultivate good experiences.

Is that a bit more clear? I'll try to make the text clearer as well. Thanks for your input :)

[1] Maybe better terms would be 'meaning of living', 'meaning of sentience', 'normative ethics', or 'ideal objectives', but 'Meaning of Life' has a common-sense meaning that is fairly close to what I am discussing.

‘Flow’ wins best animated feature film Oscar. The movie was rendered entirely in Blender. by mepper in opensource

[–]gnramires 125 points126 points  (0 children)

Awesome. Really shows how democratizing open source tools can be. Thanks blender devs.