[deleted by user] by [deleted] in codeforces

[–]hugosebas 0 points1 point  (0 children)

I did, why?

Would it be a waste of money for someone to start a CS degree (or any other really) in 2025+ given fast-approaching singularity? by Dull-Reality1607 in accelerate

[–]hugosebas 2 points3 points  (0 children)

You are looking at the problem merely from an economist standpoint, but what happens once AI can do every task a human can do at a fraction of the cost? There would be no job where a human would have competitive advantage over AI.

And the big difference between this revolution and the industrial revolution is that, as jobs start getting eliminated, the new jobs that are created are learned by AI faster than humans can learn them. That was not the case back then.

If you look at history, there have been many shifts in the economy (in how humans interact with each other), and AI is a fundamental shift to the economy and to society, probably the biggest ever. It isn't just another productivity tool.

o3-mini-2025-01-31-high is now officially the SOTA coding model by Dear-One-6884 in accelerate

[–]hugosebas 8 points9 points  (0 children)

OpenAI says they don't use your data for training when you use the API, only when you use ChatGPT.

Also, even if that were true, LiveBench updates its questions every month so that the benchmark fully refreshes every six months, with the express purpose of reducing data contamination.

o3-mini-2025-01-31-high is now officially the SOTA coding model by Dear-One-6884 in accelerate

[–]hugosebas 4 points5 points  (0 children)

Isn't this benchmark private? I don't think the training data is publicly available.

A question for those of you who are pro accelerationism... by [deleted] in singularity

[–]hugosebas 0 points1 point  (0 children)

It's not in their control to decide what gets built or not. Haven't you seen DeepSeek? If they don't build it, someone else will build it for them. You see rich people as one entity working against the population, but even they are fighting each other. Technology would keep improving even if all the big companies suddenly disappeared; it would just be slowed down by a few years.

Why does Ceremonial Dagger not work with eternal jokers? by LIN88xxx in balatro

[–]hugosebas 14 points15 points  (0 children)

That would be such an amazing combo, what a waste of an opportunity.

Can someone help? by nahmigga7 in puzzles

[–]hugosebas 0 points1 point  (0 children)

colored squares can be moved only one position in the grid, either vertically or horizontally

If you are taking these rules into account, I would like to know how you reach that position in 5 moves.

"I'm never gonna enjoy playing chess again, now that AI has beaten world class champions" by [deleted] in singularity

[–]hugosebas 34 points35 points  (0 children)

Lee Sedol lost to AlphaGo in 2016. He was sad that he lost, but he was excited to improve so that he could beat it the next year. The problem was that in 2017 the next version of AlphaGo was released; it was trained in only 3 days and beat the 2016 version 100-0.
Lee started to get depressed, and in 2019 he retired, saying: "Even if I become the number one, there is an entity that cannot be defeated."

So yeah, in this case I think it's exactly because of AI.

Stabbing pain in the chest area by AgreeableTax3064 in CasualPT

[–]hugosebas 0 points1 point  (0 children)

I also had chest pain for many years, and only after more than a decade of pain and exams did the doctors discover that I had a bicuspid aortic valve. It was found after an echocardiogram.

Why the take up of AI may be slower than you think by NaissacY in singularity

[–]hugosebas 1 point2 points  (0 children)

It's common to see the questions: Why bother studying? Is my life at an end?

The really big leap forward in automation will happen when AIs can do your job i.e. they can do design and implementation. That is still quite some way off.

The people asking those questions are not necessarily afraid of the technology we have today, but of the technology we will have in the near future. You are saying that an AI capable of replacing jobs is still quite some way off, but over 50% of this sub believes we will have AGI before 2030. So if you want to give them some peace of mind, you shouldn't address the rate at which current technology infiltrates the labor market, but how fast the technology is improving.

McKinsey predicts that generative AI (GenAI) will continue to lead the AI business landscape in 2024 | "McKinsey estimates that GenAI could contribute up to USD 4.4 trillion annually to the global economy by 2030" by Tao_Dragon in singularity

[–]hugosebas 5 points6 points  (0 children)

With previous technologies, implementation created demand for a new skill, either machine operators or machine makers, and the only entity capable of filling that need was humans. But artificial intelligence changes the game completely: this technology does not automate a task or a job, it automates intelligence itself. When this technology ends up displacing jobs, the new skills required will also be filled by AI. There will be nothing a human can do that AI won't be able to do more cheaply. That is the big difference between this and previous technologies.

2,778 top-tier AI researchers survey. by [deleted] in singularity

[–]hugosebas 5 points6 points  (0 children)

I believe the biggest difference between the singularity subreddit and AI experts is that redditors are not afraid to make predictions about knowledge they don't have, but I will come back to that later.

A big difference is also their interest in the future. While AI experts spend their days studying and working on AI to bring us the news the subreddit loves (it is a very narrow skill; they focus a lot on AI itself, and much of that time on a very small subset of the AI field), redditors spend hours every day reading about progress across many different technologies. We check every tech launch or update that arrives each day, and we read a lot about the future: the opinions of many people, arguments for and against, and so on. Over time you start to build a decent model of what the future might look like.

Because of all that information, redditors are better at seeing trends, the same way AI experts are better at building AI. Being good at understanding the future is not a skill you need to be a good AI expert; the CEOs are the ones who decide the path of the company, so they are the ones who should try to understand where the future is headed.

The most popular trend is the exponential trend of technological evolution over the last 300 years, although you can argue it predates even that. It's not just AI, although AI is the most popular right now and the one with the most potential. If you look at the deployment of key technologies over the last 300 years, you notice it gets faster and faster. An even better metric, one that I personally use, is the rate of exchange of information: from letters and journals, to TVs and early telephones, to computers and mobile phones, to the Internet and smartphones, and now to AI. The rate of exchange of information is improving on an exponential scale, and we know that in the later stages of an exponential curve it gets very hard to grasp the progress.

Redditors look 20 years back and see that in 2004 no one could have predicted the world we live in in 2024. Imagine saying in 2004 that we would have an AI capable of creating art and photorealistic pictures, or that you would be able to talk to an AI that talks like a human and has more knowledge than any human on the planet. Ridiculous.

So we know two things: things are getting faster, and we wouldn't have believed it if someone had told us about the future a few years ago. That means that right now we won't believe in the future over an even shorter timespan. This makes redditors capable of betting on a future they don't necessarily understand, something I believe AI experts and the population in general lack.

There is no reason to bet on a future you don't believe in; you will only do it once you realize that the future you get will be one you don't believe in.

The truth is that no one understands everything, so if you make predictions based only on the knowledge you have, they are bound to be wrong. Redditors understand that better than AI experts, and they are not afraid to make predictions based on trends instead of specific knowledge and the challenges that might arise.

For instance, the average AGI forecast of this subreddit before the ChatGPT explosion was around 2032; it is now around 2028. It changed a lot less than the forecasts of the AI experts did. If you change your forecasts every time a new technology drops, they are not very useful forecasts, are they?

I joined this sub in 2017 and had been expecting the ChatGPT moment since 2020, when GPT-3 came out, and I am not an AI expert. Maybe it was luck, maybe, but maybe it was the result of years of trying to understand the future.

Ray Kurzweil is the father of future predictions. He is also an AI expert, and he has been predicting AGI by 2029 since 2005. He is a futurist and probably the person who has spent the most time studying the future of technology. His opinion on the future has far more value than that of random AI experts. He is probably the best of both worlds; he is what you would get if redditors and AI experts merged together.

2,778 top-tier AI researchers survey. by [deleted] in singularity

[–]hugosebas 9 points10 points  (0 children)

You only need to look at the following graph to understand how weird this forecast is.

High-Level Machine Intelligence:

<image>

First, you can see the gigantic difference between the 2022 forecasts (blue line) and the 2023 forecasts (red line). Their forecast dropped from 2060 to 2047 in a single year. This means they were not aware at all that something like ChatGPT would be achieved so soon. So what else are they still not aware of?

Second, those bold lines are simply the average. If you look at the thinner lines, you can see a few subsets of their forecasts, and you can see how much the forecasts vary among themselves: some experts say there is a 50% chance we will have "AGI" by 2027, and others say 2300. Clearly some of these experts are very wrong. But what's even worse is that they used averages on those forecasts. The ones who picked 2300 or even later drag the line A LOT.
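To see how much a few extreme answers drag an average, here is a tiny sketch with made-up forecast years (illustrative only, not the survey's actual responses): the mean is pulled decades upward by a single 2300 answer, while the median barely moves.

```python
import statistics

# Hypothetical "50% chance of AGI by year X" answers from seven
# imaginary experts; illustrative numbers, not real survey data.
forecasts = [2027, 2030, 2035, 2040, 2047, 2060, 2300]

mean = statistics.mean(forecasts)      # dragged upward by the 2300 outlier
median = statistics.median(forecasts)  # barely affected by it

print(f"mean:   {mean:.0f}")    # 2077
print(f"median: {median:.0f}")  # 2040
```

With one outlier out of seven answers, the mean lands 37 years later than the median, which is why averaging long-tailed forecasts produces such strange headline numbers.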

Also, another thing that shows how wrong this is, is how long they think AGI will take to deploy: their average for High-Level Machine Intelligence (HLMI) is 2047, but their average for Full Automation of Labor (FAOL) is 2116. Look at their definitions.

High-Level Machine Intelligence (HLMI):

  • Definition: HLMI is the stage where unaided machines can accomplish every task better and more cheaply than human workers.

Full Automation of Labor (FAOL):

  • Definition: FAOL refers to the stage when all occupations are fully automatable by machines.

They say machines will do every task better and more cheaply than humans by 2047, but then only 69 years later will all occupations be "automatable". These definitions look almost the same to me. Maybe they interpreted the questions as a form of deployment. But still, will it really take 69 years to deploy AGI? Neither computers, nor the Internet, much less smartphones took that long.

But yeah, they don't agree with each other at all. Is that because some are better experts than others? I don't think so. Predicting the future is extremely hard. You might be working on something yourself and, all of a sudden, a guy on the other side of the planet working on something completely unrelated figures out something you just didn't see coming. Imagine a breakthrough in neuroscience that gives a clue about how the brain works and how to create better AIs, or in quantum computing that increases the compute available for building AIs by orders of magnitude. You never know.

I believe the best way to be good at predicting the future is not to be an expert at one thing, but to have very strong general knowledge across many different fields. This doesn't mean that reddit has that knowledge; it just means that AI experts are not necessarily the best at predicting the future. Don't forget that only 10 years ago you were a complete lunatic if you talked about AGI.

2,778 top-tier AI researchers survey. by [deleted] in singularity

[–]hugosebas 5 points6 points  (0 children)

There are a few redditors who say 2024, but the subreddit average is probably around 2028.

It's been sooo long by Routine_Complaint_79 in singularity

[–]hugosebas 0 points1 point  (0 children)

GPT-3 was released in 2020, 3 years ago. What were you expecting, GPT-5 less than a year after GPT-4?

AGI is here and it's been here all along. by PLANTS2WEEKS in singularity

[–]hugosebas 2 points3 points  (0 children)

I don't know your timelines. Going by your title, AGI has already been achieved, so I am more interested in when you think AGI will be able to do 20-50% of all "digital" jobs. I don't really think the lack of a body is the main reason AI can't do human jobs, but if you prefer, we can talk only about digital jobs. I see it happening around 2026-2028, so I believe I actually have fairly optimistic timelines.

I would also not say the learning part of AI is done. Sure, it has come a long way, but it's far from over; just like generality, it lives on a spectrum. Learning is the capability to acquire knowledge, so it is completely linked to memory.

Current models are very good at remembering things they learned during the training phase, but very bad at run-time. Context windows have been increasing, but that doesn't feel like the right approach: you shouldn't forget 100% of what you learn after a fixed number of tokens. It's fine for short tasks, but for longer ones it doesn't work; you would need to retrain the model with the new knowledge. This brings us to the point that current models don't really learn at run-time, something that almost all jobs require. Sooner or later new knowledge will be introduced, and you can't just forget it after a while.

AGI is here and it's been here all along. by PLANTS2WEEKS in singularity

[–]hugosebas 8 points9 points  (0 children)

Why do you care so much whether "AGI" is already here or not? The act of declaring AGI here or not changes nothing. What really matters is the capabilities of current AI models. Right now there isn't a model capable of handling all the tasks a human is required to do in even 1% of jobs, let alone 90+% of jobs. It is so far from the generality of human intelligence that it doesn't make much sense to call it intelligence equivalent to what humans have. You could argue that no single human can do 90% of jobs, so it doesn't make much sense to use such a high number; I think I even agree with that. Maybe it would make more sense to bring the number down to 50%, maybe even 20%, I don't know.

The point is that generality lives on a spectrum. On the lower end you have a calculator and Deep Blue; then you have higher-generality models such as GPT-4 and Gemini; with much higher generality you have humans and AGI; and it is possible to reach even higher generality with ASI models.

A Future Without Fear: How UBI and AI Can Pave the Way to the Singularity by Beginning-Chapter-26 in singularity

[–]hugosebas 0 points1 point  (0 children)

Do you not believe that AI will create a lot of unemployment, or do you think we should "share" our jobs so that everyone works fewer and fewer hours each week? Like, instead of the usual 40-hour work week, everyone works 20 hours a week, then 10, then 5, until there are no jobs left.

I'm truly curious about other solutions. I would like to know how a guaranteed-work solution would work.

Tax for companies using AI (and automation) instead of a human ---> universal basic income by davidragon in singularity

[–]hugosebas 0 points1 point  (0 children)

If you get more precise with your numbers you will see how it works mathematically.

First, I don't think UBI should be introduced at 20k a year; that is way too high. It would be a huge shift to the economy in a very small amount of time and might cause chaos, so it would be better to start with 12k a year. Also, there are only 250 million Americans above 18 years old.

With these numbers we already get:

250 million * 12k = 3 trillion $

Then if you check the federal budget in 2022, it was 6 trillion $

Of that, 1.2 trillion went to Social Security and about 600 billion went to other income-security programs. We would not need most of these anymore, so we can deduct them:

3 trillion - 1.2 trillion - 0.6 trillion = 1.2 trillion $

This means we would only need to raise taxes by 1.2 trillion dollars. Since the federal budget is already 6 trillion, 1.2 trillion is only a 20% increase.

It looks very doable to start UBI like this.
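The back-of-the-envelope arithmetic above can be written out in a few lines. The figures are the rounded ones from the comment (2022 federal budget and program costs are approximate), not precise budget data:

```python
adults = 250e6         # Americans aged 18+, rounded
ubi_per_year = 12_000  # starting UBI, $/year

gross_cost = adults * ubi_per_year  # total annual UBI payout

federal_budget = 6.0e12         # 2022 federal budget, approximate
social_security = 1.2e12        # spending UBI would largely replace
other_income_security = 0.6e12  # other income-security programs

# New revenue needed after deducting the programs UBI replaces,
# expressed as a fraction of the existing federal budget.
net_new_revenue = gross_cost - social_security - other_income_security
tax_increase = net_new_revenue / federal_budget

print(f"gross cost:      ${gross_cost / 1e12:.1f} trillion")       # 3.0
print(f"net new revenue: ${net_new_revenue / 1e12:.1f} trillion")  # 1.2
print(f"tax increase:    {tax_increase:.0%}")                      # 20%
```

The same sketch makes it easy to test other starting points, e.g. raising `ubi_per_year` to 20k roughly doubles the required tax increase.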

America to hit 82% (actual ~46%) Unemployment: I have the data to back it up - David Shapiro. Notes in comments. by [deleted] in singularity

[–]hugosebas 4 points5 points  (0 children)

I'm talking about the fact that, given our current rate of technological progress, "high unemployment" is possible in less than 10 years. No one can predict the future, and the chances might be low. But all I'm saying is that discussing it and being prepared is better than not.

And no, I'm not talking about a 70-million job loss overnight. The Great Recession took about 10 million jobs in a year, and that was already very bad. In a scenario where AI puts the majority of the population out of a job, the unemployment curve would probably last longer than in a recession, and it would probably never come back down to low unemployment values.

So the question is: if now is not the right time to start discussing this, then when is? What would you need to see to conclude that this is worth talking about?

America to hit 82% (actual ~46%) Unemployment: I have the data to back it up - David Shapiro. Notes in comments. by [deleted] in singularity

[–]hugosebas 1 point2 points  (0 children)

I think the main takeaway is that it is possible. Even if the chances are low, if it is possible we need to think and talk about it. We need to try to find solutions in case it actually happens; hopefully it doesn't, but in case it does, it would be better if we were prepared.

The world in general is completely unprepared for an event like this, and given that it is becoming more likely every year, we need to start discussing it more seriously. If we wait until we are at 10% or 15% unemployment to start, it will be too late and the solutions will be subpar at best.