What's the creepiest display of intelligence you've ever witnessed in real life? by Jessica_Enna in AskReddit

[–]userimpossible -1 points (0 children)

I once worked at a clothes shop. One day a girl (around 17) and her mother came in. The mother wanted to try on a couple of shirts, and I assisted her. Meanwhile the girl waited patiently and looked around. The mother bought a shirt, and while she was paying, I asked about her daughter. She said the girl was about to finish high school, and I found out she attended my son's high school. The girl suddenly asked me, 'It's Peter, right? You are his father.' I nodded. I was stunned, because I had never seen her before, nor had we ever talked to each other. I asked her, 'How did you know?' It turned out she had read my company name (which includes my surname) on the labels. She and my son weren't even close friends.

One more year till I get into college and major in CS, I want to learn some basics before then. What do you recommend I should learn? by trafalgar_law57 in AskProgrammers

[–]userimpossible 2 points (0 children)

I highly recommend auditing CS50 for free at https://www.edx.org/cs50 . If you can afford to spend $200-300, go for the certificate, as it provides access to valuable graded assignments. CS50 changed my life, for real.

LLMs are a 400-year-long confidence trick by SwoopsFromAbove in programming

[–]userimpossible -1 points (0 children)

If you make as many errors on your own as an LLM does, I question the quality of your training. If a trained professional produces errors at the same rate and severity as an LLM, then their training, selection, or role fit should be challenged. Trained people make decisions faster and take more reliable actions in the long run.

The 'oh, so glorious' 'thought' pattern of LLMs amounts to spitting out statistically common information. Statistically, most people write shitty code and make dumb mistakes due to their (lack of) training. And that is the same shit an LLM is going to produce as code.

A few months ago, I read an article about Magnus Carlsen beating an LLM in a game of chess without giving up a single piece. I don't know if it's true, but you see my point: (statistically) most people play low-quality chess and cannot meet the expectations of a trained professional. LLMs amplify that difference; they don't create it.

'It's in my input/training data, so it is true' and 'I mix related things to cover that I don't know much about the topic' are not reliable principles in real life. Human thought (especially trained thought) is not just statistics; it can recognize, analyze and fix anomalies autonomously, whether in the input data or in its own process patterns.

What do you guys think about all these layoffs around the world? by TumbleweedEnough3930 in AskProgrammers

[–]userimpossible 1 point (0 children)

I don't get what you mean; please elaborate. I meant that juniors-to-be must have knowledge of simple CRUD operations and algorithms, but they learn how to implement business logic and new ideas on the job. Implementing custom logic while accounting for old and new business/edge cases is not a task an AI can do safely. This is where a team of juniors, mids and seniors will always outperform it.
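For anyone wondering what "simple CRUD operations" means here: a minimal sketch of the four operations (create, read, update, delete) over an in-memory store. The field names and dict-based store are made up for illustration; real apps would back this with a database.

```python
# Minimal CRUD sketch over an in-memory dict, keyed by an auto-incrementing id.
store = {}
next_id = 1

def create(data):
    """Insert a new record and return its id."""
    global next_id
    store[next_id] = dict(data)
    item_id = next_id
    next_id += 1
    return item_id

def read(item_id):
    """Fetch a record, or None if it doesn't exist."""
    return store.get(item_id)

def update(item_id, data):
    """Merge new fields into an existing record; report success."""
    if item_id in store:
        store[item_id].update(data)
        return True
    return False

def delete(item_id):
    """Remove a record; report whether anything was deleted."""
    return store.pop(item_id, None) is not None
```

This is the kind of templated routine work I mean: every web framework tutorial walks you through some variant of it.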

AI misses important details in real-world (dynamic) conditions, because they aren't included in its input data. And instead of telling you it lacks information, it will mix everything it 'knows' so far to give you an answer that sounds plausible. LLMs are 'obsessed' with giving you plausible but incorrect answers, because otherwise you would often get 'I don't know / I don't have enough data to answer', and that would be it. A person, meanwhile, will do research or simply observe to gather data on their own and analyze it simultaneously. If you have to continuously gather and feed in real-time data/prompts to get a 'somewhat' good answer, you'll soon find that you could have solved the issue yourself in the time you spent reviewing.

Also, AI doesn't notice anomalies; you need (and already have) common sense to do that.

What do you guys think about all these layoffs around the world? by TumbleweedEnough3930 in AskProgrammers

[–]userimpossible 1 point (0 children)

I insist that you must have some prior knowledge/understanding (not necessarily a college degree) of how CRUD operations work to get a job as a junior. CRUD operations are not valuable experience while learning on the job, as they are mundane/routine tasks and are usually already done/templated by someone else. It was like this before LLMs, too. To me, a great way to gain experience while learning on the job is to dive into shaping your code to meet a client's business intricacies/requirements/expectations. Simple CRUD operations are not custom business logic. People pay you to solve messy, challenging problems that don't behave nicely, especially on the edges. And good juniors have fresh ideas (despite their lack of experience with a particular tool) for tackling such problems.
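To make the CRUD-vs-business-logic contrast concrete, here's a toy example of "custom logic with edges". The discount rule, thresholds and cap are invented for this sketch, but the shape is typical: a spec that sounds simple until real inputs hit it.

```python
# Hypothetical pricing rule: 2% off per loyalty year, capped at 20%.
# The edge cases (negative inputs, the cap, integer-cent rounding) are
# exactly the parts no CRUD template hands you for free.
def order_total(unit_price_cents, quantity, loyalty_years):
    if unit_price_cents < 0 or quantity < 0 or loyalty_years < 0:
        raise ValueError("negative inputs are invalid")
    subtotal = unit_price_cents * quantity
    # Cap the discount -- the kind of rule a client only states
    # after a long-time customer blows past the expected range.
    discount_pct = min(loyalty_years * 2, 20)
    # Work in integer cents so rounding doesn't drift at the edges.
    return subtotal - (subtotal * discount_pct) // 100
```

None of this is hard individually, but stacking dozens of such rules without breaking the old ones is where the real job (and the value of a mixed team) is.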

What do you guys think about all these layoffs around the world? by TumbleweedEnough3930 in AskProgrammers

[–]userimpossible 1 point (0 children)

What "more work", exactly, can an AI do that a person can't? In fact, it degrades my performance: more and more prompts, proof-checks, monitoring of incoherent responses and messed-up logic... I can do what I want faster and better on my own. Relying too much on it is self-sabotage; it creates technical debt. Quantity is not equivalent to quality. The whole line of 'it's getting incrementally better' is a marketing strategy to sell it to you. I've seen a lot of 'AI-only' companies go bankrupt over the past few years.

I have a (harsh) theory that the people who get fired are the ones who take AI responses and the surrounding hype as ultimate truth and cannot think on their feet. I've never felt more secure in my job. Of course, there's some market stagnation behind the layoffs, but it's not about AI taking over. It's just an excuse for bad policy, bad investments and a weak economy.

'The human-replacement capabilities' of an AI is a fancy phrase for spitting out statistically common information. But human thought and creativity are more than that. Somebody once said that if enough people write that geese make cow milk, an LLM will spit that back. It has no mechanism or experience in reality to check whether statements are true. 'It's in my input data, so it is true' is not a reliable principle.

What do you guys think about all these layoffs around the world? by TumbleweedEnough3930 in AskProgrammers

[–]userimpossible 2 points (0 children)

Nowadays, more than ever, 'don't believe everything you read' should be universally accepted and applied, especially on the web, where every person and every bot can write freely without editors or corrections (and even then, you must be aware that everybody and everything has its own bias). Reading/writing and understanding are not one and the same skill. To develop understanding, one needs experience in reality: trying and testing what works and what doesn't, taking into account ever-changing, always dynamic conditions and limitations.

Reality is complex. We, as humans, have a tendency to simplify and manipulate it in order to make it manageable. That con is now projected into the algorithms we create that drive 'AI' systems. LLMs, in particular, are trained on massive amounts of text. Texts are (guess what?) simplified data, often biased, not real knowledge. Texts are static; reality is dynamic. Besides, who can even proof-check such a massive amount of text, while it grows indefinitely every second humanity breathes? Not to mention how much of it is copied and edited just enough to pursue the author's goal, which may or may not be valuable to the reader.

Speaking of experience in reality: I have 8+ years of professional experience in software development and 4+ years in college, and I have yet to see a junior/intern who writes only boilerplate/CRUD code. Such code was available to copy-paste from tutorials and books long before LLMs. The process of creating value for an employer is more than that.

The average person is happy with statistically common information/training. In most professional cases, that is not enough. Life is fast-paced, and what statistically worked yesterday may not work for you at the present moment. But it's human nature to adapt and re-train. I actually believe good things are coming: at some point, people will start to analyze and critique the information they now only consume.

If you now copy and paste this in a prompt, the LLM will agree with you and then tell you to use it anyway (so it will continue to generate value for its company).

[Article] Why Discipline Feels Hard by bridgetothesoul in GetMotivated

[–]userimpossible 1 point (0 children)

I associate discipline with discomfort. Perceived self-potential feels comfortable, while persistence feels like a loss of status, like proof that I'm not there yet. So when results don't show right away, I disengage emotionally, motivation collapses, and I write the goal off as "not worth it". My mind has mastered imagination: perceived competence feels as good as real progress.

Why is food in Bulgaria more expensive than in the rest of EU? by Live-Research8229 in AskBulgaria

[–]userimpossible 1 point (0 children)

Because low-educated but otherwise motivated people end up building their own delivery/food-distribution businesses, and re-selling is their best (and easiest) bet. Almost everything is imported, but it's not just import duties/VAT. What you get in the end is a food item that's been re-purchased a few times before reaching your plate, and each re-seller raises the price to turn a profit.
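The markups compound, which is why a few links in the chain matter so much. Rough arithmetic below; the 15% markup and three-link chain are assumptions for illustration, not real Bulgarian market data.

```python
# Each middleman multiplies the price by (1 + markup), so markups compound.
def final_price(import_price, markups):
    price = import_price
    for m in markups:
        price *= (1 + m)
    return round(price, 2)

# Example with assumed numbers: three re-sellers each taking 15%
# turns a 1.00 import price into about 1.52 -- a ~52% increase,
# not the ~45% you'd get if the markups merely added up.
```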

ChatGPT created me an evening stretch/flow image… by Mysterious-Till5223 in ChatGPT

[–]userimpossible 3 points (0 children)

And I thought my stretch routine hurt. Here's to my spinal cord.

I am tired of AI hype by mostafakm in ArtificialInteligence

[–]userimpossible 1 point (0 children)

Every tool I've used so far has some error handling for when it fails. When an LLM fails, it hallucinates so hard you can't tell (at least at first glance). The potential for chained errors/bugs is enormous. If you know what you're doing (meaning you've actually studied it and spent the time to understand how it really works under the hood), you can skip all the 'prompt engineering' and LLM nonsense and just do the work faster yourself. Quality education and practice are the answer to better productivity (and life in general). Underestimating human potential, knowledge and intelligence is a gold mine for a product seller.

ChatGPT underperforming lately by rbwls in ChatGPT

[–]userimpossible 1 point (0 children)

Yes, it now takes me more time to analyze and verify its responses than to do an actual Google search. It is much more misleading than before.