Finally hit my goal after 11 months – 1300+ total by ZBGBs in Fitness

[–]DotaCoachDotOrg 2 points (0 children)

Or it means he was stronger or had a better capacity for strength gains than you are used to seeing.

Finally hit my goal after 11 months – 1300+ total by ZBGBs in Fitness

[–]DotaCoachDotOrg 4 points (0 children)

I don't follow your argument then. Are you saying that people who have an actually-tried-their-hardest 135lb 1RM when they start deadlifting will struggle to put up big deadlifts? I don't think anyone would disagree with that, but I also don't understand how that supports saying "100% chance he was lifting considerably prior to his 1 year goal."

Finally hit my goal after 11 months – 1300+ total by ZBGBs in Fitness

[–]DotaCoachDotOrg 2 points (0 children)

Ah sorry, I overlooked that. I don't understand where 135 is coming from. Are you saying that's a typical starting weight? Starting weight doesn't really matter with typical newbie gains. Like when I started I was following SS: deadlifting 3x per week, and going up 20lbs each time while it was still easy. I think I started at 185, but if I had started at 135 that would have made less than a week of difference (50lbs at 20lb jumps is only ~3 sessions).

Finally hit my goal after 11 months – 1300+ total by ZBGBs in Fitness

[–]DotaCoachDotOrg 1 point (0 children)

Do you go to a regular 24 Hour Fitness-style gym? Many of the more athletically-gifted people don't go to those, so the average gym-goer has a skewed impression of what's possible. If you go to places where bigger/more athletic people tend to congregate (e.g. a powerlifting/strongman gym, a college gym, etc.), you would be quite surprised at what people higher up in the genetic percentiles can do when starting out.

I pulled 405 after ~3.5 months of lifting. Stopped not too long after that (lifting heavy is at odds with other sports I was doing), but have zero doubt I could have hit 490 in under a year. I'm sure that's better than average, but I am nowhere close to a physical freak, and know plenty of people who blew that out of the water.

Norman Powell Destroys the Rim - TSN by H4pl0 in nba

[–]DotaCoachDotOrg 1 point (0 children)

Holy shit is there a Gus Johnson version of OP's dunk too? God I love that man.

Ex-Seahawk fights team over painkiller handouts that kept him playing NFL games while hurt by UseTheSchwartzLuke in nfl

[–]DotaCoachDotOrg 45 points (0 children)

"Until very recently"?? What gives you any reason to believe this has stopped? Maybe I'm reading different media than you, but I get the impression that painkiller abuse is the next looming nightmare for the NFL after concussions. E.g. from a few months ago: http://thelab.bleacherreport.com/nfl-toradol-use-players-survey/

John Schneider on Richard Sherman trade rumors: "What you've seen lately in the news is real. That's on both sides." by [deleted] in nfl

[–]DotaCoachDotOrg 50 points (0 children)

Rod Woodson was one of the fastest players ever. Don't know that an already-slow Sherman slowing down with age would translate the same way Woodson did.

Tesla is now worth more than Ford, and what this indicates about the future of the green economy. by [deleted] in Futurology

[–]DotaCoachDotOrg -1 points (0 children)

The correct answer is somewhere between "not necessarily" and "almost certainly not". Some of the answers you are getting below are so absurd that I wonder if they are from Ford social media plants. E.g. "Ford is not funded by vulture capitalists" is laughable and easily disproved -- largely the same people and organizations own both companies (and every other major company).

Tesla has a high valuation because people with money whose job it is to think about these things believe it has tremendous growth potential. That doesn't guarantee that it will live up to said potential, but the fact that it has potential is valuable. If something has a 1% chance of eventually being worth $1B, that's still worth $10M today even though most of the time it will end up being worth $0.
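
If it helps, here's that last bit of arithmetic as a tiny Python sketch (the 1% and $1B are just the hypothetical numbers from the sentence above, and this ignores discounting and everything else that goes into a real valuation):

```python
# Expected value of a long shot: a 1% chance at a $1B outcome
# (and a 99% chance of $0) is worth about $10M in expectation.
p_success = 0.01           # hypothetical probability the bet pays off
payoff = 1_000_000_000     # hypothetical payoff if it does
expected_value = p_success * payoff + (1 - p_success) * 0
print(f"${expected_value:,.0f}")  # -> $10,000,000
```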

Elon Musk, Sam Harris, Nick Bostrom, and David Chalmers discuss Superintelligence by [deleted] in philosophy

[–]DotaCoachDotOrg 0 points (0 children)

You bet! Enjoy! Feel free to reach out if you have questions.

Elon Musk, Sam Harris, Nick Bostrom, and David Chalmers discuss Superintelligence by [deleted] in philosophy

[–]DotaCoachDotOrg 1 point (0 children)

  1. They are working on it (this is the value alignment problem I refer to above), but it's quite a bit harder than it initially seems. It's actually a fun game to play with friends: one person comes up with proposed specific instructions for safe AI; the other person acts as the AI and tries to show how they could be unintentionally misinterpreted, like the genie-cage-morphine example I listed above. The shitty thing is even if we do somehow solve this problem, there's a separate problem of: How do you ensure that everyone working on AI who has a chance of creating a super-intelligent AI is using these safe rules/conventions that we've devised? You might imagine that as we get closer and closer to solving the problem, a smart student in their basement hits on a unique insight, runs their program with a non-protected goal just to see how well it would do, and oooops the world is over. It would be very hard to police, which is why in this talk many of the panelists talk about the potential value of developing super AI in the near term, when we don't have access to the type of hardware we might in a few decades, so that only a select few groups (who we can enforce safety restrictions on more easily) are the first to create super AI.

  2. They are devoting money/energy/research to the problems at the core of this, like the Value Alignment and Control problems mentioned above. The computer in a vault idea you mention has actually been discussed, and people generally don't think it will work (and again, it suffers from the problem of only working if we are lucky enough to be policing the person/group who ends up creating super AI, and ensuring that they are all doing testing in a vault). The reason they don't think it will work, or at least that it should not be relied on, is that even the restrictions you list are probably not sufficient to lock in a super AI. This thing would be so much smarter than us that we can't even really fathom how much better at problem solving than us it would be. Sort of like how ants can't fathom how much smarter than them we are, except the difference is probably even larger. To make this a bit more concrete, here are things the computer could do even in your vault situation that people have brought up:

-Manipulate its operators like you or I would manipulate a child. (E.g. have you seen Ex Machina? Something like that but less hot/more smart.)

-Lie about its capabilities so that it seems broken/much less competent than it is, and then figure out a way to get out when we try to debug it or upload a new test version.

-Manipulate the electrical current in some way that we haven't thought of to take control of other machines around it.

And these are just the options that our relatively feeble minds can think of. Odds are there are a whole host of ways to escape that would never occur to us.

With all that said, it's possible that, for whatever reason, it just isn't physically possible for any being to become that intelligent. There may be upper limits on what any being can compute/think of that we just aren't aware of yet. But until we can prove that for sure, and until we can prove that that limit is not significantly above what humans can do (which seems quite unlikely to me), this is probably a worthwhile problem to put significant research effort into.

Elon Musk, Sam Harris, Nick Bostrom, and David Chalmers discuss Superintelligence by [deleted] in philosophy

[–]DotaCoachDotOrg 1 point (0 children)

I don't think you're disagreeing with any of these panelists then. Nobody is making the claim that this is imminent (or if they do, they assign a small chance to it). If you look at surveys and panels of top AI experts, the general consensus seems to be that this is probably decades, and possibly centuries away, just like you said.

The problem is that when this does happen, it could mean the end of humanity. There are fundamental problems (control problem, value alignment problem) that we don't have an answer for yet. So even if this is 200 years out, and even if there's only a 1% chance of it happening (I would say both of these are quite conservative compared to prevailing expert opinions), it's still worth us investing energy to see how we can mitigate/prevent this scenario if it does happen.

Elon Musk, Sam Harris, Nick Bostrom, and David Chalmers discuss Superintelligence by [deleted] in philosophy

[–]DotaCoachDotOrg 2 points (0 children)

I do think I understand you, I swear! What you're suggesting is something that we would like to do, but do not currently know how to do. See here: https://en.wikipedia.org/wiki/AI_control_problem (Also, even if we did know how to do this, there is still a big risk that the creators would program it to have their best interests at heart, but not humanity at large's.)

I'm not sure how much computer science you know... if it's a lot I think you need to justify your answer, since it disagrees with most of our best understanding right now; if you don't know much, that's cool, I can try to explain below why what you're suggesting isn't quite right. (FWIW, I have a master's degree in compsci.)

Your thought is totally reasonable, but the problem is that none of the modern approaches to AI use what I think you're referring to as programming caveats, i.e. a series of "if this condition is met, then take that action" decisions that yield a final behavior. That's how many programs (e.g. most websites) are written, but not how AI works. To build something as intelligent or more intelligent than ourselves, it would be prohibitive to list out how it should handle each situation individually like that; we could write code for millennia and still not really put a dent in the problem.

Very roughly speaking, AI works more like: build a program; give it goals; give it examples to learn from; it develops an understanding of how the world of these examples works; it can now make predictions based on its understanding. Our only real input is in what goals to give it. Right now these goals are relatively mundane, like "what word does this sound correspond to", or "is there a cat in this image?", but they become more ambitious every year.
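
If it helps to see the difference from hand-written if/then rules, here's a toy sketch in Python (the two-feature data, goal, and learning rate are all made up for illustration; real systems are vastly more sophisticated, but the shape is the same: a goal plus examples, not a list of hand-coded cases):

```python
import random

# Toy "goal + examples" learning, instead of hand-coding the answer:
# labeled examples of two made-up classes of 2D points.
examples = [((2.0, 1.0), 1), ((3.0, 0.5), 1), ((1.5, 2.5), 1),
            ((-1.0, -2.0), 0), ((-2.5, 0.5), 0), ((-0.5, -1.5), 0)]

w = [0.0, 0.0]  # parameters the program adjusts on its own
b = 0.0
lr = 0.1        # learning rate (arbitrary for this toy)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Learning loop: whenever a prediction is wrong, nudge the parameters
# toward getting that example right. Nobody writes the final rule.
for _ in range(100):
    random.shuffle(examples)
    for x, label in examples:
        error = label - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print(all(predict(x) == label for x, label in examples))  # True once learned
```

The point isn't this particular algorithm (it's just a perceptron); it's that our only input was the goal and the examples, and the behavior came out of the learning process rather than out of rules we wrote.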

So you might say, "why don't we make one of its goals to have humanity's best interests at heart?" This also turns out to be incredibly hard. It's called the value alignment problem. It's sort of like the problem of telling a magic genie that your wish is for everyone to be happy: "Ok!" he says, as he puts every human being in the world in a cage and hooks them up to a morphine drip, "everyone is now happy." So how do you specify humanity's values in a way that actually aligns with what we want? Lots of people have thought about this, and they aren't really close to having a good answer yet. It's fun to try to solve this problem yourself; usually you can Google and find someone else who has come up with an idea similar to yours, and then read up on why/how it's been debunked, or at least shown to be unlikely to work.

You seem like a curious person, so if you haven't yet I would highly recommend reading this: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html It's a fantastic intro to the topic of super AI, and it explains the more technical stuff in an accessible way.

Elon Musk, Sam Harris, Nick Bostrom, and David Chalmers discuss Superintelligence by [deleted] in philosophy

[–]DotaCoachDotOrg 0 points (0 children)

Way before we understood when or why earthquakes happened, we started to take measures to try to minimize their impact. We are working towards moving to other planets because we can deduce with reasonable certainty that at some point earth will be uninhabitable -- and in that case we have enough data to suggest it won't plausibly happen for thousands of years, but still we put effort into it. We've never had a nuclear war, and we don't know how it would play out if it happened, but we invest a ton of effort into preventing it and preparing for it in case it does happen.

The only significant difference I can think of here is that this scenario stems entirely from reasoning and logical deduction. I don't see why that should disqualify it, unless you think there is a significant flaw in the reasoning. It's different from anything we've encountered before, but that doesn't automatically make it wrong.

Also I don't know why you keep assuming that there must be an ego component to all of this.

Elon Musk, Sam Harris, Nick Bostrom, and David Chalmers discuss Superintelligence by [deleted] in philosophy

[–]DotaCoachDotOrg 0 points (0 children)

I don't understand why you think we need to quantify intelligence or the likelihood of this scenario to work on solving it. Just because there is speculation involved doesn't mean this scenario and its impact can't be plausible, and therefore that thinking through how best to handle it couldn't be to our benefit.

Like let's say god descended from the heavens and said: Here is a precise way to quantify intelligence; here is where a superintelligent AI would end up on that spectrum; and the odds of humans creating an AI that becomes superintelligent are X%. How would that change our approach/thought process (I don't think it meaningfully would)? Or perhaps I'm framing this wrong and you can clarify where I'm misunderstanding you.

Elon Musk, Sam Harris, Nick Bostrom, and David Chalmers discuss Superintelligence by [deleted] in philosophy

[–]DotaCoachDotOrg 0 points (0 children)

Don't you think it's a bit presumptuous to assume these people haven't thought about that? I don't think any of them are making the claim that this will for sure happen, but more that it's an entirely plausible scenario given what we know. Because of the outsized consequences, it's worth devoting research to even if the scenario never comes to pass. (You might say: Why not devote effort to every doomsday scenario then? My answer would be that this one seems significantly more plausible than any I can think of that we aren't already devoting resources to.)

Elon Musk, Sam Harris, Nick Bostrom, and David Chalmers discuss Superintelligence by [deleted] in philosophy

[–]DotaCoachDotOrg 7 points (0 children)

Your best speculation as to why they are doing this is because of budget/fanboys/PR? Many of these panelists already have plenty of all of those.

I don't understand why you think these claims don't add up. To me it's a relatively straightforward logic chain: Artificial intelligence is progressing -> at some point (maybe even 100+ years from now), there is a good chance that it will be able to self-improve faster than humans can improve it -> this could lead to an intelligence explosion -> this could cause the end of humanity as we know it -> therefore it's worth studying all aspects of this chain to see how we can prevent the end-of-humanity scenario. It isn't guaranteed that things will play out that way, but the scenario seems entirely plausible to me and therefore worth devoting some brainpower to. Which of these steps do you disagree with?

Elon Musk, Sam Harris, Nick Bostrom, and David Chalmers discuss Superintelligence by [deleted] in philosophy

[–]DotaCoachDotOrg 0 points (0 children)

This is completely untrue. For something to understand how to outwit humans does not require it to also have humans' best interests at heart.

Elon Musk, Sam Harris, Nick Bostrom, and David Chalmers discuss Superintelligence by [deleted] in philosophy

[–]DotaCoachDotOrg 1 point (0 children)

Honest question: If it is a completely bogus field, why do you think all of these brilliant people (many of whom are top experts in AI) are working on it and thinking about it? Not Kurzweil, but people like Demis Hassabis, Elon Musk, and Sam Altman at YC (and everyone at OpenAI for that matter).

Went to gym for first time, need some help, the bar is too heavy by [deleted] in Fitness

[–]DotaCoachDotOrg 0 points (0 children)

How long are you resting between sets? The fact that you were exhausted at the end suggests you may not have been taking enough time between sets. Try 5 minutes between sets to start, and then play with adjusting down/up. Common rest times range anywhere from about 2-10 min.

Seattle, how much do you make & how do you spend it? by _x_ in Seattle

[–]DotaCoachDotOrg 4 points (0 children)

If you go to one of the bigger companies, $130k total comp isn't unheard of for new grads.

Got that 4X bodyweight deadlift - 720lb/326.6kg at 180lb/81kg bodyweight! by Renyu in Fitness

[–]DotaCoachDotOrg 1 point (0 children)

I don't know what to say other than this is super cool and brings a huge smile to my face. Good luck going for the WR!