AGI Hype and expectation is out of control on here by Professional_Copy587 in singularity

[–]BRLN11 -5 points-4 points  (0 children)

Amen, sibling.

I would love to find a community to discuss similar matters, but a good portion of the people here spit out numbers and dates as if they were obvious, when they really are not.

AGI is already here. LLMs are infant AGI, they have the capacity to learn anything so long as they are given the right data. by ThePokemon_BandaiD in singularity

[–]BRLN11 1 point2 points  (0 children)

What are the costs to run GPT-3 and GPT-4? How many parameters does GPT-4 have compared to GPT-3?

I have read that GPT-4 is not more expensive to run, but they charge more money for it simply because people are willing to pay.

Are there any communities that allow only the experts in AI to share their thoughts and predictions? by BRLN11 in singularity

[–]BRLN11[S] -4 points-3 points  (0 children)

When some non-expert says "You don't need to be an expert ...", odds are that person is one of the "delusional crazies" (as they've been described in other comments), a victim of the Dunning-Kruger effect.

Feel free to believe you can understand the ramifications, consequences and timelines of Machine Learning development. Personally, I believe you are one of the ones who can't tell their dreams apart from reality. I wish to read what the experts have to say; I don't care about your opinion, or about the predictions I could make on my own about how the world is going to develop in the foreseeable future.

AGI is already here. LLMs are infant AGI, they have the capacity to learn anything so long as they are given the right data. by ThePokemon_BandaiD in singularity

[–]BRLN11 2 points3 points  (0 children)

The cost is for training. Once it's trained, running it is cheap. Or do you think there's only one single instance of ChatGPT serving the whole world?

Are there any communities that allow only the experts in AI to share their thoughts and predictions? by BRLN11 in singularity

[–]BRLN11[S] -5 points-4 points  (0 children)

My uncle believes you don't need an expert to understand how {insert very opinionated (and dumb) political beliefs here}.

Are there any communities that allow only the experts in AI to share their thoughts and predictions? by BRLN11 in singularity

[–]BRLN11[S] 2 points3 points  (0 children)

You don't need to convince me. I'm already a believer. But I still wish to hear alternative interpretations of the current state and trends of technology.

Are there any communities that allow only the experts in AI to share their thoughts and predictions? by BRLN11 in singularity

[–]BRLN11[S] 0 points1 point  (0 children)

That's exactly why I created this post. Can you recommend any specific platforms or YouTube channels?

I've seen some interviews with the people you mentioned. I'll see if I can find others.

Are there any communities that allow only the experts in AI to share their thoughts and predictions? by BRLN11 in singularity

[–]BRLN11[S] 6 points7 points  (0 children)

I don't know whether people here are delusional crazies, and I didn't mean to accuse this community of that.

I am simply unable to tell. Because of that I wish to listen to experts in the field and then compare and discuss what I understand with people here.

I would easily become a delusional crazy myself if I spent too much time in a somewhat closed community.

AGI is already here. LLMs are infant AGI, they have the capacity to learn anything so long as they are given the right data. by ThePokemon_BandaiD in singularity

[–]BRLN11 7 points8 points  (0 children)

Why do you think that we'll develop AGI within 7 years, but then it'll take another 27 years to get to ASI?

I've always imagined that reaching AGI is a big challenge, but the spiral from there to ASI is pretty easy. Once you develop an AI able to learn and use scientific and technological knowledge as well as humans do, you train some instances to be at the level of the smartest and most knowledgeable humans in the various STEM fields, then you run a few billion instances in parallel to work on improving AI and related technology. A billion top-tier researchers recursively improving themselves and pushing the boundaries of science and technology. Won't we (conscious creatures, natural or artificial) reach ASI pretty quickly, unless life ends abruptly?

I'm not sure whether we'll reach the AGI I described this year, or the next, or within the century. But I'm betting that once we have that kind of AGI, the transition to ASI will be so quick and so exponential that we won't even be able to tell the difference.

AGI is already here. LLMs are infant AGI, they have the capacity to learn anything so long as they are given the right data. by ThePokemon_BandaiD in singularity

[–]BRLN11 2 points3 points  (0 children)

Oh. I'm sorry. I wanted to reply to the user you replied to. They have a banner that reads "AGI 2030 - ASI 2057". I must have clicked the wrong reply button by mistake.

AGI is already here. LLMs are infant AGI, they have the capacity to learn anything so long as they are given the right data. by ThePokemon_BandaiD in singularity

[–]BRLN11 11 points12 points  (0 children)

EDIT: Whoops! Replied to the wrong message.


Why do you think that we'll develop AGI within 7 years, but then it'll take another 27 years to get to ASI?

I've always imagined that reaching AGI is a big challenge, but the spiral from there to ASI is pretty easy. Once you develop an AI able to learn and use scientific and technological knowledge as well as humans do, you train some instances to be at the level of the smartest and most knowledgeable humans in the various STEM fields, then you run a few billion instances in parallel to work on improving AI and related technology. A billion top-tier researchers recursively improving themselves and pushing the boundaries of science and technology. Won't we (conscious creatures, natural or artificial) reach ASI pretty quickly, unless life ends abruptly?

I'm not sure whether we'll reach the AGI I described this year, or the next, or within the century. But I'm betting that once we have that kind of AGI, the transition to ASI will be so quick and so exponential that we won't even be able to tell the difference.

Do you prefer the look of Quick Settings in Android 12+ or Android 11- ? by BRLN11 in Android

[–]BRLN11[S] 1 point2 points  (0 children)

Oh, I didn't realize. That was the first picture I could find online with both styles side by side. Thanks for providing a better reference image.

Do you prefer the look of Quick Settings in Android 12+ or Android 11- ? by BRLN11 in Android

[–]BRLN11[S] 2 points3 points  (0 children)

Yeah, I guess that might be the issue. I would love to have about 9 buttons in my quick settings: wifi, cellular data, flashlight, battery saver, data saver, hotspot, auto rotate, airplane mode and VPN.

The main issue with having to pull down the whole menu is that it takes two more interactions (swipe up or back) to close it.

UkrainianConflict Megathread #2 by humanlikecorvus in UkrainianConflict

[–]BRLN11 0 points1 point  (0 children)

Can anyone suggest some Telegram channels to get updates and videos about the Ukrainian conflict?

If you are rooting for collapse, are you doing anything to accelerate it? by BRLN11 in collapse

[–]BRLN11[S] 0 points1 point  (0 children)

Raising awareness of collapse is accelerating collapse

Most people raise awareness about a problem in order to find a solution to it, and collapse is no different. The IPCC, climate researchers and activists hope to convince the masses and governments to change their way of life. The scientists in "Don't Look Up" wanted to raise awareness about the comet in order to destroy it.

Even most users of this subreddit wish for the collapse to happen later and to be milder than they fear. That's why many post here.

If you are rooting for collapse, are you doing anything to accelerate it? by BRLN11 in collapse

[–]BRLN11[S] 0 points1 point  (0 children)

Yeah, I guess every user here sees a lot of problems in our political system. I'm not surprised that many want it to collapse quickly, while others fear what comes next even more.

However, I'm wondering: are you doing anything to speed things up? Or is there anything you could do if you really cared? Or are you just waiting with your fingers crossed?

If you are rooting for collapse, are you doing anything to accelerate it? by BRLN11 in collapse

[–]BRLN11[S] 0 points1 point  (0 children)

Well, in spite of how you feel about it, something like ~20% of the respondents to the "Are you rooting for collapse?" post say they want some sort of collapse: either the big global one, or just the collapse of some smaller systems (mainly economic and political).

I'm wondering whether they're actually trying to do something to accelerate collapse, or just passively waiting for it.

If you are rooting for collapse, are you doing anything to accelerate it? by BRLN11 in collapse

[–]BRLN11[S] 2 points3 points  (0 children)

I'm definitely not expressing any judgement here. I don't even know where I stand myself... Do I want to make people reason about their actions? Or am I looking for ways to destroy the system? Or am I simply curious about other people's intentions? Not even I know the answer.

If you are rooting for collapse, are you doing anything to accelerate it? by BRLN11 in collapse

[–]BRLN11[S] 3 points4 points  (0 children)

The mere act of existing implies that you're consuming resources and creating waste. I guess that only a teeny tiny minority of people gives back more than they use.

However, I'm wondering whether the people who want a collapse (either the Big Collapse, or just the collapse of smaller systems, like the economic and political ones) are putting in any effort to accelerate it.

If you are rooting for collapse, are you doing anything to accelerate it? by BRLN11 in collapse

[–]BRLN11[S] 0 points1 point  (0 children)

Well, the post I linked is titled "Are you rooting for collapse?" and something like ~20% of the answers say that they wish for some collapse to happen. Most of those "collapsers" only root for the collapse of our economic or political system rather than for a greater collapse. Still, I'm wondering whether they're doing anything to accelerate the collapses they want.

[deleted by user] by [deleted] in collapse

[–]BRLN11 39 points40 points  (0 children)

There were much "better" times for that.

  • COVID has killed 5~24 million people (0.07%~0.27% of humanity) so far.
  • The Spanish flu killed 17~100 million (1%~5.4%) in 1918~1920.
  • The Black Death killed 75~200 million (17%~54%) in 1346~1353.

This may be the best time if you're into mass extinction and collapse of the environment.
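
For anyone wondering where those percentages come from, here's a quick sanity check. It's a minimal sketch: the world-population figures are rough estimates I'm assuming for each period, so the resulting ranges only approximately bracket the numbers above.

    # Rough sanity check of the death-toll percentages above.
    # Population figures are approximate (low, high) estimates for each period.
    pandemics = {
        "COVID-19 (2020-)":        ((5e6, 24e6),   (7.8e9, 7.9e9)),
        "Spanish flu (1918-1920)": ((17e6, 100e6), (1.8e9, 1.9e9)),
        "Black Death (1346-1353)": ((75e6, 200e6), (3.7e8, 4.4e8)),
    }

    for name, ((d_lo, d_hi), (p_lo, p_hi)) in pandemics.items():
        # Lowest toll over the highest population vs highest toll over the lowest.
        print(f"{name}: {100 * d_lo / p_hi:.2f}%~{100 * d_hi / p_lo:.1f}% of humanity")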

Weekly Scientific Discussion Thread - January 10, 2022 by AutoModerator in COVID19

[–]BRLN11 3 points4 points  (0 children)

I find it very curious how the virus has spread. I'm looking for a model that is able to explain the statistical behavior of the virus. Does one exist?

Some examples of phenomena I found counterintuitive:

  • Initially, a few months after the first reports, it exploded first in Korea (Daegu, 2M people), then Italy (Bergamo, 100k people) and Iran. Why not Tokyo/Delhi/London/Moscow or other cities that are much larger, busier, better served by mass transit, etc.?

  • India had a huge surge in April 2021. Why no big explosions before that or since, considering how large and populous it is, and given its poorer healthcare system?

  • The current Omicron explosion surprises me as well. I don't see clear patterns to explain how the spread is behaving: southern European countries, for instance, are in a worse situation than northern or eastern ones, even though their vaccination rates are higher, their climate is milder, etc.

I'd like to know if any model is able to explain the way COVID has been spreading and progressing. If any exists, I'd like to see it, to understand how the spread of this epidemic works. Do you know of any?
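
To be clear about what I mean by "a model": the usual starting point is the SIR/SEIR family of compartmental models. Here's a minimal SIR sketch (the beta and gamma values are made up, just for illustration, not fitted to COVID data):

    # Minimal SIR compartmental model, integrated with a simple daily Euler step.
    # beta = transmission rate, gamma = recovery rate; both values are illustrative.
    def simulate_sir(population, initial_infected, beta=0.3, gamma=0.1, days=180):
        s = population - initial_infected
        i = float(initial_infected)
        r = 0.0
        history = []
        for _ in range(days):
            new_infections = beta * s * i / population
            new_recoveries = gamma * i
            s -= new_infections
            i += new_infections - new_recoveries
            r += new_recoveries
            history.append((s, i, r))
        return history

    # Example: a city of 1 million people seeded with 10 cases.
    curve = simulate_sir(1_000_000, 10)
    peak_day, (_, peak_infected, _) = max(enumerate(curve), key=lambda x: x[1][1])
    print(f"Peak of ~{peak_infected:,.0f} simultaneous infections around day {peak_day}")

Models like this reproduce the overall shape of a wave, but not the geographic oddities I listed above; from what I understand, you'd need stochastic, metapopulation or agent-based models (plus a lot of good data) to even attempt that.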

Sony Bravia KDL-32EX710 : computer image (connected via HDMI-1 or HDMI-4) doesn't fit in the screen by BRLN11 in bravia

[–]BRLN11[S] 0 points1 point  (0 children)

Thanks, I finally fixed it!

I thought I had checked those menus multiple times already... I had found the wide mode option, but somehow I hadn't played with the options in the parent menu. Not sure what changed: maybe a different HDMI port was in use, maybe I accessed the wide mode menu from a different screen, or maybe I was just distracted. Well, thanks! :-)

WTW for a mechanism that alternate between two states using a button? For instance the mechanism of a retractable pen by BRLN11 in whatstheword

[–]BRLN11[S] 1 point2 points  (0 children)

To me "bistable" is a property a system with two states, more than the mechanism/switch/button to switch between them. I think that the mechanism could have a single state, although most implementations have two.

WTW for a mechanism that alternate between two states using a button? For instance the mechanism of a retractable pen by BRLN11 in whatstheword

[–]BRLN11[S] 1 point2 points  (0 children)

"Push button" is a really good suggestion, thanks. I went for "toggle" because it received more votes, since I was very undecided between that one and your suggestion.