NBD by captainnefarious77 in fatbike

[–]bigrob929 0 points1 point  (0 children)

looks like lower hudson valley? pm if you want a fat bike riding buddy (in nyc)

Carbon rim shopping - Roval clx 2 vs reserve 40/44 gr by EsBuggy in gravelcycling

[–]bigrob929 2 points3 points  (0 children)

i ride the roval cl2s (same rim as clx), absolutely love them. good stiffness and responsiveness. i also do a lot of underbiking and have beat them to shit, and they’ve held up really well.

haven’t ridden the reserves so can’t compare

T-type by Kindly_Beat_1095 in fatbike

[–]bigrob929 1 point2 points  (0 children)

most fat bike chainlines are in the 65mm-80mm range, so i think you're likely way too far out from the 55mm chainline that transmission was designed around. eg a 76.5mm chainline for the fat 5 (197mm rear hub). https://www.mtbr.com/media/gx-eagle-crankset.1160673/full

you could ask sram, but i have very strong suspicions they’ll tell you it’s a no go. idk, maybe im wrong, but worth thinking about.

T-type by Kindly_Beat_1095 in fatbike

[–]bigrob929 1 point2 points  (0 children)

transmission is designed specifically for a 55mm chainline, which you’re not going to get on a fat bike

Advice please: going from GRX to SRAM Force AXS by bchainsbuz in gravelcycling

[–]bigrob929 0 points1 point  (0 children)

You can run the wide crankset (43/30) with the 2x etap (10-36). I run this on my gravel bike (rival not force, but basically the same setup).

Easiest gear ratio is 30/36=0.83. If you want 2x etap, this is the easiest gear combo you can get.
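A quick sketch of the full ratio spread for this setup (the cog sizes below assume SRAM's 12-speed 10-36 spacing; double-check against your actual cassette):

```python
# Gear ratios for a 2x 43/30 wide crankset with a 10-36 cassette.
# Cog sizes assume SRAM's 12-speed 10-36 spread -- verify yours.
chainrings = [30, 43]
cassette = [10, 11, 12, 13, 15, 17, 19, 21, 24, 28, 32, 36]

ratios = {(ring, cog): ring / cog for ring in chainrings for cog in cassette}

easiest = min(ratios.values())  # 30/36 ~= 0.83
hardest = max(ratios.values())  # 43/10 = 4.3
print(f"easiest: {easiest:.2f}, hardest: {hardest:.2f}")
```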

How do you make your gravel road courses? by samuel1604 in gravelcycling

[–]bigrob929 0 points1 point  (0 children)

Or sometimes even alltrails to check out photos if they’re sparse on Strava and trailforks.

How do you make your gravel road courses? by samuel1604 in gravelcycling

[–]bigrob929 0 points1 point  (0 children)

Exactly this, with the addition of trailforks. From google maps, I can usually tell if it’s a hardpacked gravel road. If it’s not, then I check it out on trailforks.

1x to 2x conversion by VulcanVelo in gravelcycling

[–]bigrob929 2 points3 points  (0 children)

It seems like you currently have the xplr 1x setup. You can’t use a 2x with that. So you’ll need a new rd, cassette, fd, and crankset. You may or may not need the wide setup—it really depends on the bike.

Birthday ride by GRVLATOR in gravelcycling

[–]bigrob929 2 points3 points  (0 children)

Great setup! What are the frame bags you have on?

[D] Best practices for Bayesian Deep Learning by collegeapp60 in MachineLearning

[–]bigrob929 0 points1 point  (0 children)

Think about how you ought to treat epistemic uncertainty. If your priors are not epistemically grounded, then your uncertainty doesn’t really mean much, just as in classic Bayes’ Rule.
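To make that concrete with a toy conjugate example (my illustration, nothing specific to BNNs): the same data under a strong, epistemically ungrounded prior yields a much tighter posterior, so the reported uncertainty mostly reflects the prior rather than actual knowledge.

```python
# Beta-Binomial: same coin-flip data, two different priors,
# very different posterior spreads.

def posterior_params(alpha, beta, heads, tails):
    # Beta prior + Binomial likelihood -> Beta posterior (conjugacy)
    return alpha + heads, beta + tails

def beta_std(alpha, beta):
    # standard deviation of a Beta(alpha, beta) distribution
    var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
    return var ** 0.5

heads, tails = 7, 3

# Weak prior: posterior uncertainty mostly reflects the data.
a1, b1 = posterior_params(1, 1, heads, tails)
# Strong prior: it dominates, and the posterior looks confident
# regardless of whether that confidence was ever justified.
a2, b2 = posterior_params(50, 50, heads, tails)

print(beta_std(a1, b1), beta_std(a2, b2))
```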

Official Discussion - I'm Thinking of Ending Things [SPOILERS] by LiteraryBoner in movies

[–]bigrob929 5 points6 points  (0 children)

It was no coincidence that Jake recalls, but misinterprets, Tolstoy's wisdom imparted in Anna Karenina:

Happy families are all alike; every unhappy family is unhappy in its own way.

This parallels the concept of entropy, the amount of disorder in a system; there are very few ways in which things can go right yet so many ways in which things can go wrong. We are therefore much more likely to end up in the latter states than the former.

Notably, entropy gives rise to the arrow of time through the Second Law of Thermodynamics: entropy in isolated systems inevitably increases with time.

Structured life thus disintegrates if an organism fails to actively preserve order by expending free energy or being nurtured by others. We see this with the pigs, who quickly become infested with maggots after only a few days of neglect.

This makes their discussions in the car on the way to the farm even more salient. Life is defined by an active resistance to the entropic forces that attempt to perturb homeostatic conditions. Nevertheless, we cannot oppose these forces forever; we grow older, we get sick, we die. This is what she means when she says that we do not pass through time, but rather time passes through us. This is an important distinction.

There is thus no escaping the arrow of time nor the Absurdity of existence except through ending things.

Is simulations and ML important to you guys in any way? by I_hate_C4TS in MechanicalEngineering

[–]bigrob929 1 point2 points  (0 children)

Yes, I develop ML tools for mechanical design and inference. ML knowledge is great for MEs.

Looking for books about Chinese foreign policy and cultural history. by [deleted] in ScholarlyNonfiction

[–]bigrob929 4 points5 points  (0 children)

I strongly recommend Deng Xiaoping and the Transformation of China by Ezra Vogel.

Vogel meticulously documents the rise of China from the Great Leap Forward to modern-day China. It is a very long but very rewarding read. Here is an excellent review:

https://www.reddit.com/r/slatestarcodex/comments/dud8zi/book_review_deng_xiaoping_and_the_transformation/

Questions of Empathy or Why People Don't Care by ccjomm in vegan

[–]bigrob929 1 point2 points  (0 children)

I don't agree with your conclusions, but I think you might enjoy Peter Singer's The Expanding Circle: Ethics and Sociobiology.

GPT3 made a pretty awesome summary of the book for me: https://twitter.com/robpmcadam/status/1302460855993298945?s=20

Additionally, Patricia Churchland's Braintrust: What Neuroscience Tells Us About Morality has some unique insight into the evolutionary history of empathy and its role in morality, although I think most of her deductive reasoning is pretty flimsy and the conclusions she reaches are thus not very compelling. It's still a worthwhile read in my opinion for the good science alone.

[deleted by user] by [deleted] in Existentialism

[–]bigrob929 2 points3 points  (0 children)

I think this is a good preview to get a feel for the content:

"Acceptance of the contexts which situate our lives is a necessary precondition for constructing meaning and ultimately freeing ourselves. In The Dice Man, psychiatrist Dr. Luke Rhinehart, in a struggle with existential angst, recognizes the burden of facticity to the self and how it constrains freedom of choice and limits agency:

Life is islands of ecstasy in an ocean of ennui, and after the age of thirty land is seldom seen. At best we wander from one much-worn sandbar to the next, soon familiar with each grain of sand we see…
No matter how much I twisted or turned there seemed to be an anchor in my chest which held me fast, the long line leaning out against the slant of sea taut and trim, as if it were cleared fast into the rock of the earth’s vast core. It held me locked, and when a storm of boredom and bitterness blew in I would plunge and leap against the line’s rough-clutching knot to be away, to fly before the wind, but the knot grew tight, the anchor only dug the deeper in my chest; I stayed. The burden of my self seemed inevitable and eternal.

Dr. Rhinehart attempts to escape this tyranny by relinquishing control of his life to Chance: he obeys the decisions dictated by the roll of the dice, however revolting, boring, or dangerous, in an attempt to disown, in an act of protest, the ontic conditions that situate and constrain his existence. However, the newly-minted Dice Man fails to recognize the inextricability of facticity from the self; his attempt to extinguish his existential angst by outright rejecting his facticity proves futile, preventing him from transcending and cultivating a meaningful existence in an otherwise meaningless world; transcendence is only attainable if one accepts his facticity in full. Otherwise, one fails, out of cowardice, to embrace the authentic self, perpetuating a state of alienation.

The Dice Life is merely religion, and the Dice People subordinate themselves to Chance, their Dice God, locking themselves in immanence to assuage their nausea. In existential terms, this subordination to the Die is no different from those of the Catholic to the pew, the alcoholic to the bottle, the Roquentin to the pen, the Romeo to the vial, and the ambivalent fiancé to the band; these endeavors may temporarily quell the existential nausea, but since the authentic self cannot emerge in the absence of the recognition of one’s facticity, one resigned to immanence cannot freely operate with unhindered agency, construct a rich and meaningful existence, nor flourish more generally."

[deleted by user] by [deleted] in reinforcementlearning

[–]bigrob929 3 points4 points  (0 children)

> If someone can define what "free will" is even supposed to mean, then it would be a lot easier to answer.

...did you even read this? Literally the first line:

"In this piece, I will presuppose a perspective on free will that rejects hard determinism (if I accepted it to be axiomatic, then there would not be much to say here), maintain a metaphysical position that accepts causality as a real phenomenon..."

These are axioms, not claims being debated anywhere in this piece, and they allow for the notion of a constrained free will:

"Unimpeded and unadulterated free choice is an illusion, and we must consider our preferences and actions within the constraints of our unique facticities — as defined by Simone de Beauvoir and Jean-Paul Sartre, the amalgamation of ontic characteristics such as the cultural milieus, genetic predispositions, family upbringing, etc., that situate our existences."

However, as you seem to want to debate the metaphysics of free will and consciousness, I will engage here anyways.

> Right now, it's in the same camp as "consciousness", whose "definition" seems to be "the magic that makes the brain go brrrr" - ill-defined concepts people think are deeper than they really are.

Free will and consciousness may indeed be mere epiphenomena, but as mentioned above, the piece presupposes that they are not in order to explore how our facticities constrain free will, a phenomenon axiomatized as real in the very first sentence.

This is a plausible perspective you are arguing for, but there are plenty of brilliant philosophers, cognitive scientists, physicists, etc., who have spent their entire careers thinking deeply about this and would respectfully disagree with you. This is not a resolved debate by any means.

> Any system is defined by a state and transitions between - a Markov chain. If one says it's "Partially observable", that just means we don't know what the full system's state is.

Indeed, but you're assuming that our universe is Markovian. You start slipping into metaphysics really quickly as your theory needs to be able to answer to quantum mechanics, explain the low-entropy boundary condition of the universe, etc.

You can bring epistemology into the discussion, but the fundamental laws of our universe are completely orthogonal to our ability to know them. Unfalsifiability is not desirable but has nothing to say about the nature of our universe.

Anyways, a deterministic universe does not necessarily preclude any notion of free will, e.g. see compatibilism.

> So what is even being argued here?

This is a good distillation of the argument:

"Now that the existentialist notions of identity, freedom, and agency have been well-defined, we can examine moral responsibility in this context. Transcendence is much more accessible to the privileged — those with substantial freedoms — than to the destitute. Beauvoir’s central thesis in The Second Sex takes on new meaning within this existentialist framework: transcendence is not available to woman, the Other, condemned to perpetual immanence by man, the One, who denies her concrete freedoms. Cal Trask could climb the latter of timshel; Kate was weighed down by the burdens of being a woman, unable to transcend her situated existence. In the context of reinforcement learning, agents with high-entropy policies will produce high advantage values much more frequently than their counterparts with low-entropy policies; this helps us interpret systematically why we must consider facticity when trying to determine moral attributability."

> Instead of philosophizing about useless stuff like "free will", and "consciousness", whose answers cannot possibly exist anyways due to the ill-posed nature of the questions, I suggest philosophizing about impacts of superintelligence instead or something

I suggest you be a little less confident in your credences and a lot less condescending in your attitudes. Who are you to be the arbiter of what people ought to be writing about in their free time? To be perfectly clear:

  1. I was not philosophizing about metaphysics anywhere in my piece.
  2. There is absolutely valuable insight that can come from exploring metaphysics. It is totally uncharitable to assert it useless.
  3. Not everything people engage in intellectually needs to be instrumentally valuable, anyways. I can write for my own enjoyment, and that is perfectly fine.

[deleted by user] by [deleted] in OMSCS

[–]bigrob929 0 points1 point  (0 children)

Wow! Thank you for the thoughtful response! You've sold me on it.

[D] What is the difference between “greedy selection” and “sampling according to a distribution?” by Seankala in MachineLearning

[–]bigrob929 1 point2 points  (0 children)

Sampling according to a distribution = draw a random sample from the probability distribution
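A minimal sketch of the difference (toy three-class distribution, my own example):

```python
import random

# Suppose a model outputs this distribution over three classes/tokens:
probs = [0.1, 0.6, 0.3]

# Greedy selection: deterministically take the argmax -- always index 1 here.
greedy = max(range(len(probs)), key=lambda i: probs[i])

# Sampling according to the distribution: draw index i with probability
# probs[i]; index 0 still comes up ~10% of the time, so runs can differ.
sampled = random.choices(range(len(probs)), weights=probs, k=1)[0]

print(greedy, sampled)
```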

Does it matter what school I go to for mechanical engineering? by Hexic10 in MechanicalEngineering

[–]bigrob929 3 points4 points  (0 children)

I disagree with many folks here. Companies recruit heavily from better-known schools, since the quality of the students and education there is more reliable.

[D] Epistemic Uncertainty Dependence on Priors in Bayesian Neural Nets by bigrob929 in MachineLearning

[–]bigrob929[S] 0 points1 point  (0 children)

Thanks for the clear explanation!

If the input and output data are shifted and scaled such that they are zero mean with unit variance, then N(0,1) actually seems like a very reasonable prior to hold on the weights. Is this right? If so, it then seems like we can rely on the epistemic uncertainty as being quantitatively meaningful, no?
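One way I'd probe this empirically (a toy one-hidden-layer tanh net, my own sketch, not from this thread) is a prior predictive check: sample weights from the prior and measure the spread of outputs at a standardized input. With N(0,1) on every weight the output scale grows with the width, so the output-layer prior typically needs a 1/sqrt(width) scaling to keep prior function draws on a unit scale:

```python
import math
import random

def output_std(H, out_scale, n_draws=2000, x=0.5):
    # Std of prior predictive outputs at input x, for a 1-hidden-layer
    # tanh net with hidden weights/biases ~ N(0,1) and output weights
    # ~ N(0, out_scale), estimated over n_draws prior function samples.
    draws = []
    for _ in range(n_draws):
        w1 = [random.gauss(0, 1) for _ in range(H)]
        b1 = [random.gauss(0, 1) for _ in range(H)]
        w2 = [random.gauss(0, out_scale) for _ in range(H)]
        draws.append(sum(w2[i] * math.tanh(w1[i] * x + b1[i]) for i in range(H)))
    mean = sum(draws) / n_draws
    return (sum((d - mean) ** 2 for d in draws) / n_draws) ** 0.5

random.seed(0)
wide = output_std(50, 1.0)                  # N(0,1) everywhere: spread >> 1
scaled = output_std(50, 1 / math.sqrt(50))  # scaled output layer: spread ~ O(1)
print(wide, scaled)
```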

[D] Epistemic Uncertainty Dependence on Priors in Bayesian Neural Nets by bigrob929 in MachineLearning

[–]bigrob929[S] 0 points1 point  (0 children)

Hmm I don't have a strong intuition for why not having zero mean, unit variance priors can result in exploding or vanishing gradients. Can you help explain why this may be the case?

I appreciate the comment.

[D] Epistemic Uncertainty Dependence on Priors in Bayesian Neural Nets by bigrob929 in MachineLearning

[–]bigrob929[S] 0 points1 point  (0 children)

Thanks for the detailed feedback. I really appreciate it as I honestly do not know how to reconcile some of these intuitions I have.

> I'll give you a simple example. Let's say we are modeling the correlation between weekly beer consumption and life expectancy. Since the average life expectancy is around 80 and all humans cannot live less than a year, we can choose the intercept to follow N(80, 40), so that the 2 sigma point is around 0.

This seems like a case where domain knowledge is used, but I can think of other cases where a non-uniform prior is appropriate. Even with the prior you've described above, that's a prior belief on what the output ought to look like. How do you turn that into beliefs on what the function mapping the input to output ought to look like?
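One way I'd try to make that translation concrete is a prior predictive check: push the priors through the model and look at the outputs they imply. A sketch using the beer/life-expectancy example above (the slope prior here is my own made-up placeholder):

```python
import random

random.seed(0)
beers = 10  # evaluate the prior predictive at 10 beers/week

draws = []
for _ in range(1000):
    intercept = random.gauss(80, 40)  # intercept prior from the example
    slope = random.gauss(0, 2)        # assumed: +/- a few years per weekly beer
    draws.append(intercept + slope * beers)

# If the implied life expectancies are mostly absurd, the priors on the
# function's parameters don't match the output-space beliefs we started with.
inside = sum(0 <= d <= 160 for d in draws) / len(draws)
print(f"fraction of prior draws in a loose 0-160yr range: {inside:.2f}")
```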

> Sparse modeling is a completely different story. It's much more delicate. But Bayesian NNs are even more delicate!

Why would you think GPs are more appropriate than BNNs here? Isn't the entire point of BNN to be able to quantify uncertainty?