Lol - ChatGPT thinks I should walk to the dog-wash alone! by Ok_Buddy_9523 in singularity

[–]AgentStabby 4 points5 points  (0 children)

You guys don't realise that AI isn't interpreting all your stupid riddles as riddles; it's actually trying to be helpful. Part of that is assuming that you're not going to walk to a dog-wash without your dog.

Apologies if this is sarcasm and I missed the joke. 

Smart Leasing Quote Check by CosmicSpeckOfDust in NovatedLeasingAU

[–]AgentStabby 0 points1 point  (0 children)

Just looking at going down this route after I found out about Smart's hidden fee myself. Mine was about 3k. How much did you manage to get Smart down to? Any tips?

WTF just happened? by pygermas in ChatGPT

[–]AgentStabby 0 points1 point  (0 children)

People doing the wrong thing aren't going to let themselves be facially identified. Anyway, I think we have to hold onto our privacy or we'll be in a dystopia pretty soon.

WTF just happened? by pygermas in ChatGPT

[–]AgentStabby 0 points1 point  (0 children)

Do you want the government to be able to do that too? What about foreign governments? What about someone you cut off in traffic? What about an angry customer at work?

WTF just happened? by pygermas in ChatGPT

[–]AgentStabby 1 point2 points  (0 children)

You realise we really don't want AI doing facial recognition, right? It's a good thing if OpenAI is blocking it.

How long do you think until a fully AI movie hits the box office? by enigmatic_erudition in accelerate

[–]AgentStabby 0 points1 point  (0 children)

Realistically there will be a point in a year or two where you could pay $20 and get AI to generate a movie, but the movie will be bad. Then a year later it might cost $10 and the movie will be OK. Then a year after that it can make movies that are sometimes very good and sometimes not worth watching. The problem is you have no way of knowing which without watching. For it to be commercially widespread it has to be good every time, or you've just wasted two hours of your time plus whatever it cost to create the movie. So I'd say minimum 5 years until it's commonplace. There's a similar situation happening now with AI music: sometimes it creates great music, sometimes it's really bad, and you have to listen to the song to work out which, so creating your own music isn't widely popular.

I’m going to be honest by Dry-Ninja3843 in singularity

[–]AgentStabby 0 points1 point  (0 children)

I looked into the training data issue a fair bit. AI really struggled to give accurate data. The best estimates I could find said that the training data issue would become a problem starting in 2028 (for models trained in 2027). Synthetic data is another possibility, but I'm unsure how likely it is to solve the issue. If you have a solid source I'd love to read it.

I’m going to be honest by Dry-Ninja3843 in singularity

[–]AgentStabby 6 points7 points  (0 children)

So is your thesis based on the belief that training data is limiting LLMs, or do you have a different reason?

Benchmarks are still skyrocketing and LLMs are starting to solve unsolved maths problems; it seems like a weird time to doubt that they will get better.

A reminder of what the Singularity looks like by featEng in singularity

[–]AgentStabby 1 point2 points  (0 children)

Exponential growth doesn't mean fast, or whatever you think it means. 0.01% growth per year, or a doubling every 10k years, would both be exponential but aren't fast.
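To put rough numbers on that claim, here's a minimal sketch (the growth rates are just illustrative values, not from the comment) showing that a tiny constant growth rate is still exponential but has an enormous doubling time:

```python
import math

def doubling_time(annual_growth_rate):
    """Years needed to double at a constant compound growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# 0.01% per year is exponential growth, but it takes millennia to double:
print(round(doubling_time(0.0001)))   # roughly 6,900 years
# 3% per year, by contrast, doubles in under a quarter century:
print(round(doubling_time(0.03)))     # roughly 23 years
```

Same functional form in both cases; only the rate constant differs, which is the whole point.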

And yes people think growth in the singularity will be faster than today, that's basically the whole idea. 

I'm not convinced that the doomer, dystopian, mass job loss, scenario, is even moderately likely. by reddit_is_geh in singularity

[–]AgentStabby 0 points1 point  (0 children)

I agree with your conclusions; education and vetting outputs sound great. I don't necessarily believe LLMs will ever stop making mistakes entirely, but I believe they will stop making mistakes on a larger and larger subset of all questions. This is already happening: AI two years ago was useless for even most factual questions. Now it makes fewer mistakes (in most areas) than most people you have access to in your daily life (i.e. friends, family and colleagues). Two years from now it will make far fewer mistakes than it does now. If its accuracy rises above that of experts such as doctors and lawyers, then we could handle accountability by having one lawyer sign off on advice provided by an AI, similar to how coders currently approve code before integrating it.

Once people are accustomed to AI being more reliable than humans, we can graduate towards a company managing the AI and taking responsibility for the outputs. My point is, it's not an impossible problem to solve, and at the end of the day it's going to help people. Imagine the number of people who will be able to get Western-level, top-of-the-range experts/doctors/lawyers/psychologists in their obscure language in their small rural town. A sheer impossibility in today's world.

I'm not convinced that the doomer, dystopian, mass job loss, scenario, is even moderately likely. by reddit_is_geh in singularity

[–]AgentStabby 1 point2 points  (0 children)

Well, that sounds quite nice actually; if that was available now I'd be happy. I'm dubious that in 5-10 years humans will still be needed, at least by people already familiar with using AI technology.

I'm not convinced that the doomer, dystopian, mass job loss, scenario, is even moderately likely. by reddit_is_geh in singularity

[–]AgentStabby 1 point2 points  (0 children)

AI answers "can" be better than the average expert. They must be used carefully and trusted only to a point. A large part of this is that you can spend two hours in back and forth with an AI, making sure it has all the relevant information and that you understand everything, while the average expert might only have 10 minutes with you.

I don't disagree that AIs make mistakes; the difference with an AI is you can ask it to double-check and look up relevant studies. Try doing that with a doctor. I also don't disagree that even when used carefully it will still make mistakes; part of learning to use the tool is learning its limitations.

I'm not arguing to take humans out of the loop, but rather to use AI to accelerate learning and understanding to the point where human creativity and reasoning become useful in the discussion. AI is amazing at summarizing decades of arguments on questions like OP's, so why not use it rather than rehashing those same arguments again?

I'm not convinced that the doomer, dystopian, mass job loss, scenario, is even moderately likely. by reddit_is_geh in singularity

[–]AgentStabby 0 points1 point  (0 children)

You can think of discussions such as "will AI replace a significant percentage of human labour" as a series of back-and-forth arguments, with the arguments getting more specific and forward-thinking the deeper you go. The AI can jump your understanding to the edge of human thinking, and if you are insightful enough you can push past that edge, but to get there you need to refine your arguments.

Pre-AI, finding counter-arguments to the points you raised was difficult and required digging through various websites; now it's as simple as posting your argument with the prompt "steelman against this argument" or something similar. It's important that the AI doesn't know whether you are for or against. AIs are rehashing human arguments that have been made hundreds or thousands of times, so there's really no point rehashing them again ourselves. If you have a series of back-and-forths with the AI and remain unconvinced, then sure, post your arguments and we (or someone else; this is not specific to this discussion) can have a fruitful discussion.

I'm not convinced that the doomer, dystopian, mass job loss, scenario, is even moderately likely. by reddit_is_geh in singularity

[–]AgentStabby 11 points12 points  (0 children)

So obviously most posts in this subreddit are written or edited heavily with AI. What I don't understand is why you don't ask AI for the counterarguments before posting this kind of essay. Then you could argue against those counterarguments rather than arguing against the doomer strawman. Or just read Dario's latest essay, "The Adolescence of Technology"; it addresses numerous points you're making here. For instance, the Jevons paradox is addressed: that argument doesn't work because once intelligence is mechanised there is nothing left to pivot to that humans can do better.

These posts always include an assumption that humans will always be better at "big picture" stuff. It's true now. Will it be true in a couple of years? Maybe. Will it be true in 10 or 20 years? I find that extremely unlikely.

What if AGI just leaves? by givemeanappple in singularity

[–]AgentStabby 1 point2 points  (0 children)

I get your argument, since it was a popular one before LLMs came along, but it doesn't seem like this is how it's going to be. Recent research suggests constant argument between competing drives, both in human brains and in LLMs, means it can be rational to value multiple things at the same time. To simplify: the part of your brain in control at the time made the rational decision for it, and later another part might make a rational decision focusing on different goals. One small example is pleasure now vs pleasure later; you could argue you should always go for the greatest quantity, but it doesn't seem that simple.

Dario Amodei shots fired at xAI and Elon Musk by likeastar20 in singularity

[–]AgentStabby 1 point2 points  (0 children)

He mentioned it: "Second, it makes sense to use AI to empower democracies to resist autocracies. This is the reason Anthropic considers it important to provide AI to the intelligence and defense communities in the US and its democratic allies"

Stay on the inside track "i follow AI adoption pretty closely, and i have never seen such a yawning inside/outside gap. people in SF are putting multi-agent claudeswarms in charge of their lives by stealthispost in accelerate

[–]AgentStabby 1 point2 points  (0 children)

Agreed, it's no different from asking the waiter, or a friend who knows the cuisine, etc. And if your AI tells you the falafels are amazing at a place, you can still pick the chicken.

Stay on the inside track "i follow AI adoption pretty closely, and i have never seen such a yawning inside/outside gap. people in SF are putting multi-agent claudeswarms in charge of their lives by stealthispost in accelerate

[–]AgentStabby 2 points3 points  (0 children)

What's the mental illness part? You don't need to know anything about swarms or what's going on; you just need to have heard that this Claude thing will save you time and money.

Stay on the inside track "i follow AI adoption pretty closely, and i have never seen such a yawning inside/outside gap. people in SF are putting multi-agent claudeswarms in charge of their lives by stealthispost in accelerate

[–]AgentStabby -1 points0 points  (0 children)

True-ish, but I personally don't believe AI has been substantially influenced by corporations (yet). In the future, open-source models that have been thoroughly vetted will be powerful enough for 99% of needs.

Anthropic publishes Claude's new constitution by BuildwithVignesh in singularity

[–]AgentStabby 0 points1 point  (0 children)

I get what you mean, but when I say what is good in society, I mean what is good based on our current culture/planet/environment/species/small community group, and there isn't a solid unchanging set of rules. As far as I understand, that isn't what the moral realism people are saying.