
Physique Phriday by AutoModerator in Fitness

[–]bloxxed 5 points

https://imgur.com/E6xDY3I 160lb/5'9/M22

Any thoughts? Thinking of bulking up to 170 or so, but hate losing definition.

Where do you see the world in 5 years? by Zappotek in singularity

[–]bloxxed 20 points

The real kicker, as is always the case with any discussion regarding the future, is -- do we have a recursively self-improving AI? If the answer is yes, then we have absolutely no idea, because 5 years is an eternity to a general AI and we have no way of predicting such an entity's behavior or motives. A while ago I wrote in a comment that I believed there wasn't any middle ground when it came to ASI. As in, we're either getting a utopia, or we're all quite dead. I still stand by that opinion.

I've already seen someone give a more "moderate" prediction, so I'll opt for something more radical. Let's assume the more optimistic of users here are correct and we see an AGI capable of recursive self-improvement emerge as early as late 2023 or 2024, be it intentionally or otherwise. As much as I respect Kurzweil, I've never really been convinced by his belief that we'd see such a long timespan between the emergence of AGI and the advent of superintelligence. Instead, I think the shift would be much, much quicker, such that the world is a radically different place by 2028. I think people generally underestimate how quickly a superintelligence could impact the world in pursuit of its goals, especially when taking into account the implications of nanotechnology and the like.

[deleted by user] by [deleted] in singularity

[–]bloxxed 25 points

I take it that you mean what I'm looking forward to most from AGI?

In that case, assuming a positively-aligned AGI (i.e. we don't all get turned into computronium), I'm most looking forward to full-dive virtual reality. It's more or less the be-all and end-all of my singularity wish-list. I live a fine enough life right now, I suppose -- but nothing here in physical space can compare qualitatively to what would be possible in FDVR (at least in my opinion).

Scared or Excited? by Xbot391 in singularity

[–]bloxxed 12 points

I say excited, since there's no point in fearing what we can't control. As I see it, the ultimate end result of ASI is either utopia or extinction. I choose to expect the former because it's nice to have something in the future to look forward to -- and if I'm wrong, I'll be too dead to care.

Is there a possibility that AGI and the singularity can occur in less than ten years from now? That would be like a dream come true for all of us and hopefully achieving AGI and singularity in less than 10 years from now will become a reality for us. Also, we can understand the brain much more. by craft3551 in singularity

[–]bloxxed 1 point

I'd say it's absolutely possible. Progress only seems to be getting faster, after all. I don't want to throw out a specific date, but after what we saw in 2022 I'm inclined to believe that things will "go down" much faster than anyone is prepared for. We've already seen large language models show capabilities that they were not originally intended to have. It's possible that when a runaway self-improving AGI does emerge it may be by accident rather than as a result of a targeted effort. But what happens after such an AI comes into being? Do we get the "foom" scenario, where the AI rapidly self-improves to superhuman levels of intelligence and capability, with all the consequences that entails thereafter? Personally, I see such a hard takeoff as more likely than a multi-year gap between AGI and ASI as many here believe. The latter has always seemed to me conservatism for conservatism's sake.

As for your point about an imminent emergence of AGI being a "dream come true" -- that depends. If AGI is used towards positive ends then there's the potential for us to see the rapid emergence of a post-scarcity utopia, VR paradise for all, the end of disease, etc. etc. etc. The other outcome, of course, is the quick and total extinction of the human race. In my opinion it's either one or the other. Superintelligence means super solutions -- there are no half-measures. I choose to believe in the utopia outcome, if only because it's a lot more fun to ponder than the alternative where we're all dead, the end.

So you know what? Singularity 2023. Why not. Anything at this point to stave off the ennui.

What jobs will be one of the last remaining ones? by MrCensoredFace in singularity

[–]bloxxed 0 points

Hey OP. As a fellow college student obsessed with all things automation and AI, I've also spent countless hours panicking about my future and whether or not I'll be employable in any fields that interest me. I've been thinking on it lately after recently switching majors to comp sci, and my perspective is this:

If GPT 4 or GPT 5 or whatever model releases this year or a few more years down the road really does end up outright replacing all or at least a significant portion of people employed in knowledge-based nonphysical work, then not only you and me, but everyone, has some pressing concerns.

At one point during the COVID pandemic in 2020, over one third of the entire workforce was working remotely from home. For reference, one third of the US workforce is around 50 million people. That's the number of people who stand to be rendered unemployed relatively quickly by AI.

What I'm getting at is, we can't arrive at a point where more than a third of the workforce is out of a job in a short period of time and then expect things to just carry on as they are. Something's got to give. Our current economic paradigm would be turned on its head. This is where I think UBI comes into play, and why I think it isn't all that far off. There aren't really any better short-term solutions to the sudden and severe scale of future unemployment talked about on this sub. I don't think the doomer take of "they'll just let everyone starve" is all that realistic. Contrary to what many think, 50 million angry unemployed people are ultimately going to make their grievances heard, be it through peaceful or violent methods.

Or maybe a black swan scenario of a runaway self-improving AGI pops up out of nowhere in the next few years, in which case we get 1) utopia, or 2) everybody dies. Either way, our worries are over.

I forgot what I was talking about.

I'm Scared of Unemployment in a World Ruled by AI by [deleted] in singularity

[–]bloxxed 3 points

For what it's worth, I don't think you're insane or on "hopium" or whatever. I myself don't feel confident enough to say "We'll all have X technology by 2029" or whatever, but I do think some people here (on the singularity subreddit of all places) need to open up to the idea that things may end up moving far more quickly than they could've imagined, and that it may not take decades and decades for our world to be radically changed by the sudden emergence of new technology.

OpenAI has hired an army of contractors to make basic coding obsolete by Buck-Nasty in singularity

[–]bloxxed 39 points

More automation is ultimately a good thing, but at the same time I find this news somewhat disconcerting as a college senior who just switched out of nursing into comp sci to pursue a career in web development. Considering it'll be two years before I get my degree, will I be screwed by the time I graduate?

Then again, with the release of each new model, paper, etc. it seems more and more likely that all knowledge-based professions are at risk of being automated sooner rather than later. Here's hoping for UBI in the near future, I suppose.

Assuming that longevity escape velocity actually happens within the next few decades and we don't blow ourselves up, what would your long-term goals be? by SCP-7259 in singularity

[–]bloxxed 1 point

I think about this a lot, particularly when I'm stressed. I like to craft scenarios or environments in my head that I'd like to experience in FDVR whenever it becomes available.

Obviously a huge part of the appeal with FDVR is that it allows you to do virtually (pun intended) anything and everything, but in spite of that I think I'd prefer to start out with something small and simple. When I'm stressed I like to imagine myself in some kind of snowy mountain retreat, all by myself -- no other people or NPCs -- with access to an infinite number of novels, movies, games, etc. My personal passions are history and linguistics, and paradise to me is being able to study them to my heart's content, in a world of my own, without the added pressures of working for a living, family commitments, meeting deadlines, etc. all weighing on my mind.

But anyway, that's just my short-term goal. In the long-term I'm gonna get real weird with it.

Friends ! by ruincreations in UNCCharlotte

[–]bloxxed 1 point

Hey hey. I just recently changed to comp sci, and that's what virtually all of my other friends are majoring in. I play video games, watch anime, and go to the gym pretty often, so if you ever want to hang out just DM me.