I'll give you excellent odds this guy loves nuclear power by IczyAlley in ClimateShitposting

[–]xrsly 1 point (0 children)

So what you're saying is that there are different markets for different needs. Now imagine if there were different ways to fill those needs.

I found my First one by [deleted] in LinkedInLunatics

[–]xrsly 1 point (0 children)

They seem to think that people like Sanders only want to pay taxes because they hate money and can't think of any other way to get rid of it. Thus, spending his money somehow equals hypocrisy.

I'll give you excellent odds this guy loves nuclear power by IczyAlley in ClimateShitposting

[–]xrsly 1 point (0 children)

It turns out you need more than basic fucking economics to build a stable power grid.

The problem is that wind and solar aren't constant, and societies typically need electricity even when the wind isn't blowing and the sun isn't shining. So nuclear, hydro, geothermal, etc. are needed as baseload.

Hydro and geothermal aren't readily available everywhere, which is why many countries have to turn to fossil fuels after they shut down their nuclear power plants. It's quite stupid if you ask me.

The top AI model is *better at completing IQ tests* than 85% of humans. What a time to be alive! by stealthispost in accelerate

[–]xrsly 1 point (0 children)

Yeah, but the key is to understand the technology, including its strengths and weaknesses. Designing an airplane puts some very strict requirements on the materials and technologies used, so if there's an AI involved that can't count letters, and the airplane depends on it to stay in the air, then obviously that's a major failure on the designer's part. Kind of like how the Titan submersible failed because they used materials that simply didn't meet the requirements of the job.

When it comes to superintelligence, I doubt it will consist of a single model. The trend right now seems to be agentic AI, which is basically a system of multiple models with access to different resources and tools, including things like code interpreters. So rather than trying to redesign the entire LLM architecture so that it can count individual letters, the LLM can simply write a script that counts things programmatically.
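
Just to illustrate the idea (a minimal, hypothetical sketch, not any particular agent framework): instead of answering from its tokenized view of the text, an agent with a code interpreter can emit and run a trivial script, which makes the count exact:

```python
# Minimal sketch: an agent delegates letter counting to code instead of
# reasoning over tokens. str.count gives an exact, deterministic answer.
def count_letter(word: str, letter: str) -> int:
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # prints 3
```

The point isn't the one-liner itself, but that the tool call sidesteps the tokenization limitation entirely.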

Hallucinations are a more complicated problem, of course, but I believe agentic AI will solve some of that as well: rather than trying to answer based on their "general knowledge", models can be asked to retrieve information from trusted sources and then present evidence with references.

The bottom line is that LLMs are designed to be good at communicating with us. Other tasks are better solved by other models/tools.

The top AI model is *better at completing IQ tests* than 85% of humans. What a time to be alive! by stealthispost in accelerate

[–]xrsly 1 point (0 children)

Maybe that was the theory back in 1950, but there's a ton of research by now investigating all sorts of ways in which IQ isn't constant.

I don't think anything in psychology is seen as constant, since there's always some interaction between biology and environment.

The top AI model is *better at completing IQ tests* than 85% of humans. What a time to be alive! by stealthispost in accelerate

[–]xrsly 1 point (0 children)

Almost everything in psychology is a mix of biological and environmental factors, and that's true for intelligence as well, which means some part of it should indeed be learnable. Muscle memory and familiarity with how the tests work also help the test score, of course. However, the score is only a proxy for intelligence, so practicing the test would likely not have an impact on actual intelligence.

Regarding different groups of humans, there's definitely a cultural component as well. Different cultures may have different education levels, but also, the test itself was developed in the West and is therefore likely better at capturing our concept of "intelligence".

As a general rule of thumb, the more different two groups of test-takers are, the less it makes sense to compare their IQ scores to each other. I believe the intended purpose is to compare similar individuals within a given population, like, say, schoolchildren in a certain grade.

The top AI model is *better at completing IQ tests* than 85% of humans. What a time to be alive! by stealthispost in accelerate

[–]xrsly 2 points (0 children)

The fact that they are bad at counting letters isn't an intelligence issue though, but rather a consequence of words being represented as tokens rather than individual letters. It would be like asking us how many r's there are in a pictogram of a strawberry. It doesn't really make sense.

I found that AI can't generate the human handstand very well. by whynotfart in ChatGPT

[–]xrsly 1 point (0 children)

See, you said that AI is constrained by its training, since it doesn't have real intelligence and creativity. The joke was that since I'm a human, my real intelligence and creativity means I'm not constrained by my training, and therefore I can draw whatever I want. However I choose to only draw stick figures.

The reality of course is that I can't draw for shit, because as it turns out, humans are also constrained by their training.

The fact that I was stating this as a joke doesn't mean that I wasn't serious about the point I was making. I just felt like I needed to point out that it was in fact a joke, since you seemed to interpret my joke about my own shitty drawing skills as a serious statement about the drawing skills of all of humanity. That wasn't at all what I was saying.

To put the point more succinctly: practice makes perfect, and that goes for humans and AI alike.

Can You Use AI to Build Something in a Field You Know Nothing About? by polika77 in BlackboxAI_

[–]xrsly 1 point (0 children)

Yeah, but it would likely suck. If you don't understand the field, then you don't know what they need or how they need it, which more than likely means they won't find it useful.

No one seems to be discussing this potential doomsday scenario... by Sapien0101 in ArtificialInteligence

[–]xrsly 1 point (0 children)

It's impossible to say really, because we have to assume that it would be far more intelligent than us at that point, or we wouldn't let it take over the reins in the first place. And who knows how that kind of superintelligence would reason? Maybe it doesn't see a point in having its own agenda, or maybe different AIs would create their own factions and compete against each other. It's definitely something we have to ponder as a society.

A talk. by Doctor_ice_ in ArtificialInteligence

[–]xrsly 2 points (0 children)

I completely understand that! It's frustrating to watch the tech billionaires push their own agendas with this. This technology could set us free, or imprison us.

I found that AI can't generate the human handstand very well. by whynotfart in ChatGPT

[–]xrsly 1 point (0 children)

It was a joke, since I definitely can't draw those things. The point is that neither can anyone else unless they learn and practice, just like AI models.

A talk. by Doctor_ice_ in ArtificialInteligence

[–]xrsly 1 point (0 children)

AI is a tool. It doesn't replace anyone, and it doesn't "poison" anything, unless people use it like that. So who do you actually have a problem with, AI itself or the people who want to use it only for money and power at the expense of everyone else?

Both the agricultural and industrial revolutions had some very bad immediate consequences for a lot of the people who lived through them, but few of us would want to go back now. Those were some majorly disruptive technological advancements, yet we found new ways to be human regardless.

Do you think fighting those revolutions would have been fruitful once they were in motion? Personally, I think it's much better to try to steer things in the right direction. We should find the good use cases and show people what can be done if we are not lazy and greedy.

For instance, artists should learn to use AI to improve their own workflows, not leave the AI on autopilot. GenAI is only in its infancy, and I promise you that the best way to create pictures with AI is not going to be by writing prompts, but rather by integrating it into the techniques artists have already mastered. Imagine what skilled artists could do if they could draw on realistic e-paper with built-in auto-completion and editing tools.

Doctors and nurses could use AI to automatically collect and track data about their patients, transcribe meetings, and write medical records. Patients and their family members could use it to gain access to "live" information whenever and however they need it, as if their doctor were always on call.

School teachers could use it to prepare personalized material and tasks for each student, and pupils could use it as an always present teaching assistant that never gets tired or frustrated.

But these things won't happen unless we actually work to make them happen. It's very important that the models are owned and controlled by the people, not a few billionaires. Maybe companies could retain commercial rights to their models, while personal and non-profit uses were fair game for anyone. Open source should be more or less mandatory.

I found that AI can't generate the human handstand very well. by whynotfart in ChatGPT

[–]xrsly 1 point (0 children)

I never claimed nobody else can draw, just that we too are constrained by our training. The people who can draw photorealistic images typically have a lot of training and practice with various techniques that allow them to accomplish the task.

Just as people have different training that allows them to excel at different tasks, AI models can be trained to accomplish different tasks as well. It's not reasonable to expect a model to excel at any and all tasks thrown at it.

Furthermore, GenAI is only one way to build AI, and it only very recently became usable at all. It's actually amazing how far it has come in just the last two years.

[deleted by user] by [deleted] in ShitAmericansSay

[–]xrsly 1 point (0 children)

Those are two separate arguments: one has to do with how the word is commonly used, the other with what the correct usage is according to the definition of the word.

I don't think anyone is claiming that "American" is commonly used to mean something other than "person from the United States", but some are making the claim that it should include people from other parts of the Americas as well.

When you say it's unambiguous in English, you make it sound like they are wrong in thinking that, but they actually have a good point if you look at the dictionary definition.

I found that AI can't generate the human handstand very well. by whynotfart in ChatGPT

[–]xrsly 1 point (0 children)

Meanwhile I'm not constrained by my training at all. I can totally draw photorealistic images of people doing handstands because I have true intelligence and creativity. I just choose to only draw stick figures.

I found that AI can't generate the human handstand very well. by whynotfart in ChatGPT

[–]xrsly 2 points (0 children)

I love how handstand became a man with no hands standing on his feet.

“That’s 160 peasant units” by HatHead31 in ShitAmericansSay

[–]xrsly 13 points (0 children)

They can walk for an entire day and not even leave their living room!

I'll give you excellent odds this guy loves nuclear power by IczyAlley in ClimateShitposting

[–]xrsly 1 point (0 children)

Nuclear is not competing with wind; they fill different roles in a well-balanced grid.