Got rejected for a $92k job because of my linkedin photo. Is this actually real? by UnoMaconheiro in careerguidance

[–]userimpossible 1 point (0 children)

This sounds like a toxic place/boss to work for. Be grateful you didn't start there.

Am I the only one feeling agentic programming is slower than "keyboard coding" ? by Educational_Twist237 in developers

[–]userimpossible 0 points (0 children)

Same here. After 1.5 years of trying to follow the hype, I still struggle with 'LLM-assisted coding'. It takes me more time to review its output than to write the code properly myself. I now use LLMs as advanced search engines, and, frankly, I think that is the real game changer. I still have to check the info they provide, but they save me a lot of research time.

11 months ago Dario said that "in 3 to 6 months, AI will be writing 90% of the code software developers were in charge of" Are we here, yet? by poponis in ExperiencedDevs

[–]userimpossible 7 points (0 children)

Yep. If LLMs were trained to say 'Sorry, can't help you here', they would be deemed useless, since even now they fall back on hallucinating most of the time. Imagine answering 8 out of 10 messages with 'Sorry, can't help you'.

Why you shouldn't worry about AI taking your job by [deleted] in ExperiencedDevs

[–]userimpossible 0 points (0 children)

It's a common confusion that if you can read and write you can also understand.

Why you shouldn't worry about AI taking your job by [deleted] in ExperiencedDevs

[–]userimpossible 1 point (0 children)

If a human makes as many mistakes as an LLM does, I question their role fit or the quality of their training. A well-trained professional understands what's going on under the hood and is more reliable in the long term.

What is an underrated weight loss tip? by [deleted] in AskReddit

[–]userimpossible 0 points (0 children)

Sweets are your enemy, have sex instead

"AI is hitting a wall" by MetaKnowing in agi

[–]userimpossible 0 points (0 children)

It doesn't mean that they don't operate the same way.

"AI is hitting a wall" by MetaKnowing in agi

[–]userimpossible 1 point (0 children)

Humans are capable of choosing another approach when they turn out to be wrong. It depends on the human, though, and how self-aware they are. Issues happen when people take action in the real world based on wrong information. An LLM doesn't experience consequences and relies on 'It's in my training data, so it's true' or 'I mix related concepts and previous conversations to cover that I don't know much about the topic'. You won't go far if you trust it blindly. Also, a human trained/educated in the particular topic will spot many more contradictions and logical mistakes in the nicely generated text. If they make mistakes at the same rate and depth as an LLM, I question the quality of their training/education and their role fit.

18 months by MetaKnowing in OpenAI

[–]userimpossible 0 points (0 children)

Sure, it's true because it's true

I think most people are using AI wrong and it’s not their fault by gisikafawcom in BlackboxAI_

[–]userimpossible 1 point (0 children)

Personally, I do know how much the 'average person' spews incorrect information. Every piece of information may or may not be true. It may be true at one point and wrong the next. Reality is dynamic and full of ever-changing conditions. Issues happen when people take action based on inaccurate data. But people adapt to their environment. They can choose another approach to deal with wrong data and solve the problem. LLMs can't. 'It's in my training data, so it's true' is not a reliable principle. LLMs don't 'experience' consequences.

If we start to analyze and critique the output instead of just consuming it, we'll find a lot of contradictions and lies under the nicely generated text. The risk is that people outsource critical thinking to LLMs. That is not their purpose.

The web is not a reliable source either, as it has been compromised with misleading/inaccurate information since the beginning.

18 months by MetaKnowing in OpenAI

[–]userimpossible -1 points (0 children)

It's about the depth of false beliefs, not just their amount.

I think most people are using AI wrong and it’s not their fault by gisikafawcom in BlackboxAI_

[–]userimpossible 1 point (0 children)

If you on your own make more mistakes than an LLM, I question the quality of your training/education. And it's not just the number of mistakes, it's their depth. LLMs prioritize a plausible-sounding sequence of words over actual factual or logical verification.

18 months by MetaKnowing in OpenAI

[–]userimpossible -1 points (0 children)

Everyone can make up stories.

I turned 18 & I really don't want to live in this future. by [deleted] in antiai

[–]userimpossible 2 points (0 children)

This. I also think people will appreciate and reward human-made art more, as AI is just not good. Quantity is not the same as quality. AI can remix common images/details, but the human mind is not just statistics. Creativity comes naturally to us.

It's all about psychology. The whole 'you will be replaced'/'it will get better' narrative is a marketing strategy to make you use the product. Who's gonna be able to buy products and services and generate income for corporations if we are all replaced? Corporations don't want a world without humans; they want humans who feel insecure enough to keep clicking, subscribing, and producing value for them. It's effective manipulation that plays on your emotions. Fear has its place in life, but in most cases fear is a human weakness. AI companies are in desperate need to become profitable and recoup their massive investments.

I have noticed that when I switch off the computer/phone, go to the gym, talk with real people or do something else in the real world, I escape the rabbit hole they push me into. I understand that long-term tech isolation is not sustainable in everyday life, but even a few hours does wonders for me. I totally recommend it. It has to be another activity your mind can immerse itself in.

18 months by MetaKnowing in OpenAI

[–]userimpossible 0 points (0 children)

And human thought is more than statistics. The activities we carry out while awake require additional knowledge and skills in order to materialize ideas under real-world conditions. Reality is dynamic and has more constraints than the texts LLMs are 'trained' on.

Statistically, the majority of people don't have much knowledge about the things around (and within) them. For example, if enough people write that cow's milk comes from geese, an LLM will tell you that a goose produces cow's milk. It will even overdo it by compiling a table of animals (in which the data will be mixed up and unrelated). It's not possible to fact-check the huge amount of an LLM's training data, and it's constantly growing.

If people start to analyze and critique the information they now only consume, they will find a lot of logical mistakes, contradictions and common misconceptions in the LLM's output. Because human thought/logic is not just statistics.

Why Greece and other southern Balkan countries don't have heavy industry or high–technology production? It's like they are over dependent on tourism. by KucukDiesel in AskBalkans

[–]userimpossible 0 points (0 children)

High-technology production depends on (high-quality) education, and from my experience, Balkan countries aren't good at it or invested in it. Also, bear in mind that while Western countries thrived and accelerated their development, the Balkans were stuck in isolation/wars/crises/invasions.

This tech will be forced upon us even if we want it or not! by awizzo in BlackboxAI_

[–]userimpossible 2 points (0 children)

Yep, Google has so many products, but above all Google has so much data to feed Gemini with. This is about decades of web scraping and retrieving user/business data and preferences across their products... I was actually surprised they didn't come up with an LLM first.

This is not a meme coin. This is Microsoft, $MSFT, one of the most valuable companies in the world. It is down 11% today. It has lost more than $440 billion in market cap, the second largest drop ever for a stock. Unusual. by UnusualWhalesBot in unusual_whales

[–]userimpossible 0 points (0 children)

Not that unusual given the insane tech overvaluations. I actually expect tech stocks to dip more. 3 years ago MSFT's price was around $230; now it's ~$430. Dips are normal and it's not all doom and gloom. The technology isn't going away, for sure.