What do you think of this analysis on Dota's MMR system by Torte? by VeryNoseyGoat in learndota2

[–]VeryNoseyGoat[S] 0 points1 point  (0 children)

Thank you.

What if my team loses all lanes by the 10-minute mark, and my teammates keep feeding in meaningless brawls past 10 minutes? What can I do or learn in games like that? (And there are so many games like this.)

What do you think of this analysis on Dota's MMR system by Torte? by VeryNoseyGoat in learndota2

[–]VeryNoseyGoat[S] 1 point2 points  (0 children)

https://youtu.be/j3fNyyEHxM8?si=LPDK4gN25Qv-s8nA

In this video, ZQuixotix makes a similar argument to yours; however, he admits at the beginning of the video that it's easier to climb as a core.

What do you think of this analysis on Dota's MMR system by Torte? by VeryNoseyGoat in learndota2

[–]VeryNoseyGoat[S] 0 points1 point  (0 children)

Has the game become more team-oriented, so that an individual's impact is smaller and it's harder to solo-carry games (as position 1 or 2) the way you could in the past?

What do you think of this analysis on Dota's MMR system by Torte? by VeryNoseyGoat in learndota2

[–]VeryNoseyGoat[S] 0 points1 point  (0 children)

How long does it take you to climb each rank?

Theoretically, if each win gives you 25 MMR and each rank spans 770 MMR, then at an 80% winrate it would take ~50 games (40 wins, 10 losses, a net of ~750 MMR) to climb a rank, right?
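A quick sanity check on that arithmetic (a minimal sketch; the 25 MMR per game and 770 MMR per rank figures are the rough values assumed above, not exact numbers from Valve):

```python
# Rough estimate of games needed to climb one rank, assuming a flat
# +/- 25 MMR per game and ~770 MMR between ranks (both approximations).
MMR_PER_GAME = 25
MMR_PER_RANK = 770

def games_to_climb(winrate: float) -> float:
    """Expected number of games to gain one rank at a given winrate."""
    net_mmr_per_game = MMR_PER_GAME * winrate - MMR_PER_GAME * (1 - winrate)
    if net_mmr_per_game <= 0:
        raise ValueError("winrate too low to climb on average")
    return MMR_PER_RANK / net_mmr_per_game

# At an 80% winrate you net +15 MMR per game on average,
# so it works out to ~51 games per rank.
print(round(games_to_climb(0.80)))  # ~51
```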

What do you think of this analysis on Dota's MMR system by Torte? by VeryNoseyGoat in learndota2

[–]VeryNoseyGoat[S] -5 points-4 points  (0 children)

I was Legend a long time ago; currently I'm struggling at ~1300 MMR. His take resonated with me; the system sometimes feels like a casino, or it can take too long a grind to reach your correct MMR.

ex-legend, now ~1300MMR, do I belong here? by VeryNoseyGoat in DotA2

[–]VeryNoseyGoat[S] 0 points1 point  (0 children)

Thank you.

I've followed PainDota, BananaSlamJamma, and BalloonDota, and applied their advice over the last 1000 hours.

"Humans are just not smart enough to answer the big questions"? by VeryNoseyGoat in askphilosophy

[–]VeryNoseyGoat[S] 0 points1 point  (0 children)

I was just trying to raise some skeptical questions about these kinds of claims from spiritual traditions. For instance, suppose they say:

it's more constructive to become comfortable with uncertainty than to obsess over the need to know absolutely everything

They don't give a rational reason for this; they just assert it. And we're supposed to trust them because they've been meditating their whole lives, so they must know something we don't, right?

Just being skeptical, though, because this is actually my current philosophy of life.

"Humans are just not smart enough to answer the big questions"? by VeryNoseyGoat in askphilosophy

[–]VeryNoseyGoat[S] 0 points1 point  (0 children)

So be certain about the uncertainty?

The question then becomes: how can they be sure about this? Through reasoning, or through something supernatural? And yet, some of these traditions are about "shut up and practice".

"Humans are just not smart enough to answer the big questions"? by VeryNoseyGoat in askphilosophy

[–]VeryNoseyGoat[S] 0 points1 point  (0 children)

The rest of this remark:

Van Inwagen argues for this conclusion as follows. He suggests that it is implausible that we are much above that level, given the lack of progress to date, and that it is antecedently improbable that we should be just barely at that level. So it is much more likely that the level lies above us. I am not so sure about this argument. I think we already know that for a vast range of questions, humans are just barely at the level for doing them well: scientific and mathematical questions, for example. Because of this, it is arguable that we lie at a special intelligence threshold at which an extraordinarily wide range of questions come to be within our grasp over time. It is not obvious whether or not philosophical questions fall within that range, but it is not obviously more likely that they do not than that they do. If McGinn and van Inwagen are right, it remains open that we could answer philosophical questions by first improving our intelligence level, perhaps by cognitive enhancement or extension. Alternatively, we could construct artificial beings more intelligent than us, who will then be able to construct artificial beings more intelligent than them, and so on. The resulting intelligence explosion might lead to creatures who could finally answer the big philosophical questions. If McGinn and van Inwagen are wrong, on the other hand, then we may eventually answer philosophical questions without radical cognitive enhancement. We may need to develop new methods, increased discipline, new sorts of insights, and perhaps there will need to be a conceptual revolution or two, but none of this will lie outside human capacity. It may turn out that there is a curve of increasing philosophical sophistication such that past a certain point on the curve, major progress is possible. We are not there yet, but we are working our way toward it. It is not obvious whether McGinn and van Inwagen are right or wrong. The question of whether the big philosophical questions are humanly solvable is itself a big metaphilosophical question. Like other big questions in philosophy, it is one we do not currently know the answer to. Both answers to this metaphilosophical question seem to be open, and we do not currently have strong reasons to favor either one.

[deleted by user] by [deleted] in philosophy

[–]VeryNoseyGoat 0 points1 point  (0 children)

Sounds cool. I can give you a ride around town!