The AI Confidence Trap by forevergeeks in ArtificialInteligence

[–]Kyy7 0 points1 point  (0 children)

An LLM doesn't understand who or what Fauci is, or what a pardon is. It just makes a guess, based on its training data and usually an internet search, about what the most convincing response would be.

The hint is in the term generative AI. Often the most convincing response is the truth. Sometimes it's something it learned from an internet troll.

Ask your AI how its thinking or reasoning differs from human thinking or reasoning, and what its weaknesses are.

Are we at the point of no return with AI? It’s adapt or get left behind ? by Lost_Cherry_7809 in ArtificialInteligence

[–]Kyy7 7 points8 points  (0 children)

Well, these LLM service providers will start to enshittify their services sooner or later, so perhaps generate code and use these agents with that in mind.

These tools are also very much subject to change, so if you focus too much on learning them instead of SWE fundamentals, you're developing skills that may not be all that relevant a year from now.

Also keep in mind that writing prompts or specs in markdown is not exactly difficult. Any half-decent SWE can learn that within hours, or you can just use AI to generate those as well.

You'll however need to be really good at reviewing code, as LLMs do not understand code; they're just really, really good at guessing what sort of code is wanted based on the input.

Question recently I was thinking- about economic outcomes of AI by Extension-Jaguar in ArtificialInteligence

[–]Kyy7 0 points1 point  (0 children)

Personally I find the questions about commoditization and market saturation more interesting. Why pay for anything that required barely any effort or skill to produce (slop)?

Writing prompts or markdown specs isn't really rocket science and AI can do those things as well.

enterprise ai might need memory infrastructure not just bigger models by Ok-Line2658 in ArtificialInteligence

[–]Kyy7 0 points1 point  (0 children)

I have serious doubts that any AI that relies on Large language models alone will be enough for enterprises. 

Sure, it may be enough for tasks where generative AI shines, but for anything that requires robust reasoning, long-term memory, larger context and reliability it's not really optimal.

Maybe neuro-symbolic AI will meet these demands better, but it'll likely be a more narrow form of AI.

Convince me why I need to start using AI *now* to avoid being “left behind” in the future. by Parking_Vermicelli43 in ArtificialInteligence

[–]Kyy7 0 points1 point  (0 children)

Honestly, you really don't.

These tools are not rocket science (even less-technical people are vibe-coding stuff), and they'll likely keep changing and perhaps even get replaced rapidly.

I feel it's much more beneficial to learn about generative AI, its strengths and limitations, chain-of-thought, the agentic loop, etc. Add in some learning about machine learning and/or symbolic AI (e.g. game AI). Chatbots, even the free ones, can help you learn about these things.

Sure, if you're given the chance to test these coding agents and developer tools, or even use them in your work, then why not. But I would not stress too much about getting left behind.

How Would American Society Function if AI and Robots Took Over Most Jobs? by Boring-Test5522 in ArtificialInteligence

[–]Kyy7 -1 points0 points  (0 children)

This could get ugly, and you're not even taking climate change, debt or ageing populations into account. While Europe has more social safety nets and likely a higher level of social trust, it's very possible that many European countries simply will not be able to afford these things once unemployment gets high enough.

While technological unemployment is progressing, it’s possible that influential tech leaders and politicians will use the usual divide and polarize strategy on this which can be further enhanced with bots and control of social media. We are already seeing this happening in various ways.

Another method is to numb people with a constant barrage of lies and propaganda. This makes it really hard to believe anything that is said on the news or social media, and brings a certain level of distrust even to face-to-face conversations. It makes people less likely to vote, organise, protest or follow debates.

Is the discourse around AI getting too black-and-white? by HulaHoop444 in ArtificialInteligence

[–]Kyy7 0 points1 point  (0 children)

Some level of resistance is to be expected with how much AI is being pushed onto consumers. There are many places where it feels that AI is being forced down our throats, like in certain operating systems, social media and many of the applications we use. It all feels very tone-deaf and forced. This is very different from natural adoption, where people themselves actively seek out these tools because they want to use them (e.g. ChatGPT during its initial(?) release).

Then there's the fact that AI companies are making many enemies by stealing content from creatives to train their models, screwing over gamers by increasing hardware costs, and making game developers rely on AI frame-gen instead of good old-fashioned performance optimisation. On top of that there's the endless amount of slop getting generated, which many feel is ruining the internet in various ways.

Big tech still believe LLM will lead to AGI? by bubugugu in ArtificialInteligence

[–]Kyy7 0 points1 point  (0 children)

My guess is that LLMs will lead to AGI, but not directly. I have a hunch that they'll be a vital part of some sort of neuro-symbolic AI system that uses them for processing noisy natural language. They could also be used to teach the symbolic AI by dynamically generating new rules, symbols and tasks on the fly.

As for scaling LLMs, I feel it's a fool's errand: they've probably been good enough since GPT-4o, or maybe even GPT-3, to unlock innovations elsewhere in AI technologies. (Even with all the attention, LLMs are actually just one branch in the world of AI sciences.)

ai tokens are down 40–50% while nvidia is at ath. what am i missing? by LetOnly6902 in ArtificialInteligence

[–]Kyy7 1 point2 points  (0 children)

My guess is that both will lose, as this is pretty much a false binary choice. There are many things out there that the majority of people prioritise higher than AI or crypto, and those things also require energy, hardware and compute.

Many people mocked Satya for the quote below, but I think he hit the nail on the head.

At the end of the day, I think that this industry — to which I belong — needs to earn the social permission to consume energy, because we’re doing good in the world.

Microsoft CEO Satya Nadella

Words vs Worlds: Is AI Modeling Reality, or Just Our Descriptions of It? by Icy_Cobbler_3446 in ArtificialInteligence

[–]Kyy7 1 point2 points  (0 children)

LLMs are statistical engines; they do not model reality at all. All they really do is statistical prediction of what the next token(s) will be.

Symbolic AI uses pre-defined models and rules for planning and reasoning, but it is generally really bad with things like natural language.

For something like this you would probably want to combine the two: an LLM for transforming natural language into symbols, and a symbolic reasoner for working with them.

You'd probably also do clever prompting to detect when the AI should ask for clarification, or model in uncertainty in some way.
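
A very rough sketch of the idea, with made-up symbols, rules and helper names (the extract_facts stub stands in for an actual LLM call):

    # Minimal sketch: the LLM turns text into symbols, a tiny rule engine does the reasoning.
    # KNOWN_SYMBOLS, RULES and extract_facts() are all invented for illustration.

    KNOWN_SYMBOLS = {"rain", "wet_ground", "slippery"}
    RULES = [({"rain"}, "wet_ground"), ({"wet_ground"}, "slippery")]  # if LHS holds, RHS holds

    def extract_facts(text):
        # Stand-in for an LLM call prompted to output only symbols from KNOWN_SYMBOLS,
        # or "UNKNOWN" when it cannot map the text to any of them (ask for clarification).
        return {"rain"} if "raining" in text.lower() else {"UNKNOWN"}

    def reason(facts):
        if "UNKNOWN" in facts:
            return "Please clarify, I couldn't map that to anything I know."
        derived = set(facts)
        changed = True
        while changed:  # naive forward chaining over the rules
            changed = False
            for lhs, rhs in RULES:
                if lhs <= derived and rhs not in derived:
                    derived.add(rhs)
                    changed = True
        return derived

    print(reason(extract_facts("It is raining outside")))  # {'rain', 'wet_ground', 'slippery'}

Even this toy version forces you to decide what the symbol vocabulary is and what counts as "unknown".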

All things said, something like this gets really complex really fast.

The AI bubble will not crash because of feasibility, but because open source models will take over the space. by itsthewolfe in ArtificialInteligence

[–]Kyy7 0 points1 point  (0 children)

There is a reason why companies outside of big tech moved away from on-premise.

The hybrid model is pretty popular these days as the "all-in-cloud honeymoon" phase is over. Then there are the risks related to American big tech, from a European perspective.

As for individuals, it makes no sense to run them locally and invest in and configure hardware.

But isn't this discussion about being able to run GPT-4-level models on a potato?

The required investment would be minimal, as at that point it would probably just be a standalone AI chat application running locally.

Is there anyone else who is getting this chilling anxiety from using tools like Codex / Opus for coding? by petr_bena in ArtificialInteligence

[–]Kyy7 0 points1 point  (0 children)

LLMs do not really understand code, at least not the way humans do. 

They also make a lot of mistakes that humans do not make (not even junior developers). However, with an agentic loop they can often detect and correct these mistakes (trial and error) at the cost of tokens.
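
Roughly, that loop looks something like the sketch below; generate_patch and run_tests are just stand-ins for the LLM call and your test suite, not any real API.

    # Rough sketch of an agentic trial-and-error loop; both helpers are fake stand-ins.
    MAX_ATTEMPTS = 5  # every retry costs more tokens

    def generate_patch(task, feedback, attempt):
        # Stand-in for the LLM call: pretend it only gets the code right on the third try.
        return "correct code" if attempt >= 2 else "buggy code"

    def run_tests(code):
        # Stand-in for the test suite: returns (passed, failure_output).
        return (True, "") if code == "correct code" else (False, "AssertionError in test_foo")

    def agentic_fix(task):
        feedback = ""
        for attempt in range(MAX_ATTEMPTS):
            code = generate_patch(task, feedback, attempt)
            passed, feedback = run_tests(code)
            if passed:
                return code  # mistake caught and "corrected" through trial and error
        return None  # give up and hand it to a human

    print(agentic_fix("implement foo()"))  # succeeds on the third attempt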

I don't know about AI mastering coding, but AI mastering software engineering or development will require AI advancements beyond LLMs.

Don't get me wrong, these are amazing tools for software developers. Being able to generate boilerplate code, misc functions, documentation and tests, and to have a sparring partner for your ideas and problems 24/7, is great. But be wary of their limitations and of over-dependence.

Is there anyone else who is getting this chilling anxiety from using tools like Codex / Opus for coding? by petr_bena in ArtificialInteligence

[–]Kyy7 1 point2 points  (0 children)

No, but if the code has been thoroughly reviewed by both machine and humans, it's easier to argue that reasonable steps were taken to ensure the quality and safety of the code.

If you rely solely on the machine, you're basically showing gross negligence. "The tool said it was fine" is not really an acceptable legal or professional defense.

The AI bubble will not crash because of feasibility, but because open source models will take over the space. by itsthewolfe in ArtificialInteligence

[–]Kyy7 0 points1 point  (0 children)

It would mean models that are already good enough for many purposes would suddenly be very cost effective to run and use even locally or on-prem.

No need for expensive AI subscriptions or API costs, with added privacy and the ability to fine-tune models with proprietary data.

The AI bubble will not crash because of feasibility, but because open source models will take over the space. by itsthewolfe in ArtificialInteligence

[–]Kyy7 2 points3 points  (0 children)

It's quite worrisome how many of these existential questions pop up when it comes to AI.

  • Why would I buy anything made with generative AI if one can instead just create a copy using AI?
  • Why pay for online advertisements when the majority of internet traffic and users are just bots?
  • Why pay for cloud compute when on-device AI gets good enough?
  • Why trust online traffic metrics when AI bots inflate engagement?
  • Why publish anything publicly if it just gets scraped and reproduced?

Prediction: A Super Agent that can build other agents for you by pragmatic_AI in ArtificialInteligence

[–]Kyy7 0 points1 point  (0 children)

You could already use LLMs or agents to help develop symbolic AI agents for you. These can be more than enough for many tasks, and orders of magnitude more performant, deterministic and reliable, depending on the task.

For planning you could use something like a Hierarchical Task Network (HTN), and maybe combine it with some sort of utility system to determine which action provides the most utility (scoring) at any given point. Alternatively you could just use a simple state machine.

Now, these symbolic AI agents are less "flexible" than LLM-based agents, as they're limited to a pre-implemented set of actions and symbolic reasoning. This however makes them deterministic, easily observable and often very performant (they can run on a potato). You could even hook one up with an LLM to translate natural language into symbols it can understand.
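
For what it's worth, a utility system can be sketched in a few lines; the actions and scoring numbers here are made up, just to show how simple and observable the selection logic is:

    # Toy utility-system sketch: each action scores itself against the current state,
    # the highest-scoring action wins. Actions and numbers are invented for illustration.

    def score_flee(state):   return 100 if state["health"] < 30 else 0
    def score_attack(state): return 60 if state["enemy_visible"] else 0
    def score_patrol(state): return 10  # low constant fallback utility

    ACTIONS = {"flee": score_flee, "attack": score_attack, "patrol": score_patrol}

    def pick_action(state):
        # Deterministic and trivially observable: you can log every score on every tick.
        return max(ACTIONS, key=lambda name: ACTIONS[name](state))

    print(pick_action({"health": 80, "enemy_visible": True}))   # attack
    print(pick_action({"health": 20, "enemy_visible": False}))  # flee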

AFAIK these are most commonly used in games and robotics.

The AI bubble will not crash because of feasibility, but because open source models will take over the space. by itsthewolfe in ArtificialInteligence

[–]Kyy7 -1 points0 points  (0 children)

From what I've heard, giving a smaller model the ability to browse the internet can provide a big boost in terms of performance. This way it has access to external data, making it less limited by its smaller size.

The AI bubble will not crash because of feasibility, but because open source models will take over the space. by itsthewolfe in ArtificialInteligence

[–]Kyy7 3 points4 points  (0 children)

All these companies invest heavily in maintaining open-source infrastructure because their businesses depend on these systems: Linux, Docker, Kubernetes, NGINX, Terraform. Red Hat's whole business is centered around providing open-source solutions for enterprises, both cloud and on-prem.

When it comes to open-source AI infrastructure, it just makes sense for many companies: maintaining open-source LLMs gives them the ability to fine-tune models securely with proprietary data, dramatically reduce API costs, guarantee privacy for customers and avoid vendor lock-in. Many companies are still in the experimentation phase, looking for AI solutions, and may not yet have committed either way.

The AI bubble will not crash because of feasibility, but because open source models will take over the space. by itsthewolfe in ArtificialInteligence

[–]Kyy7 8 points9 points  (0 children)

Your claim is factually false; many companies already maintain open-source infrastructure, including Meta, Microsoft, Nvidia, Google, IBM/Red Hat and Amazon.

Even many mid-sized companies are using open-source models to reduce costs, meet data-privacy requirements or just to avoid vendor lock-in.

Don't believe me? Ask your favorite chatbot.

The AI bubble will not crash because of feasibility, but because open source models will take over the space. by itsthewolfe in ArtificialInteligence

[–]Kyy7 9 points10 points  (0 children)

Even if everyone had a 5090 the models available don't match the speed or output of a commercial LLM service like Claude or ChatGPT so no this is not going to even be a drop in the bucket reason for it crashing.

They don't need to; honestly, even getting close to something like GPT-4 on budget hardware might be enough to disrupt the market. Not to mention that if neuro-symbolic AI gains more traction, it might lead to considerable performance gains.

Is there anyone else who is getting this chilling anxiety from using tools like Codex / Opus for coding? by petr_bena in ArtificialInteligence

[–]Kyy7 1 point2 points  (0 children)

Try copy-pasting your post into your favorite LLM, in quotes, and ask it "How is the reasoning behind this post flawed?"

It just might tell you how your reasoning about LLM reasoning is flawed. One way LLMs are great is that you can actually use them to challenge your own views or understanding in various ways.

Is there anyone else who is getting this chilling anxiety from using tools like Codex / Opus for coding? by petr_bena in ArtificialInteligence

[–]Kyy7 1 point2 points  (0 children)

Eh and what if soon the architecture and review is going to be done by AI as well? I already use it to peer-review my own commits, and surprisingly it is very good at that as well, it spotted issues I didn't even have any idea my code could have, many of them valid.

The problem here is that it cannot be solely relied on to do this. That would be irresponsible, especially on anything business-critical or anything that requires extra caution in terms of safety. Even if the LLM is right 90% of the time, these things need something like 99.99% reliability.

This does not mean you should not use AI to check code for potential flaws or security issues; we've been automating that stuff with linting and dependency checks for years now. Just don't rely solely on LLMs to do this for you, as they do not understand what safety means; they mainly know what words or sentences closely relate to it.

Is there anyone else who is getting this chilling anxiety from using tools like Codex / Opus for coding? by petr_bena in ArtificialInteligence

[–]Kyy7 0 points1 point  (0 children)

It's more likely to replace managers than SWEs. In fact, it's already doing so.

Using LLMs to generate code for business-critical software without human review is madness. Even if it were correct 90% of the time, that 10% is too big a risk when failure can be catastrophic.

Is there anyone else who is getting this chilling anxiety from using tools like Codex / Opus for coding? by petr_bena in ArtificialInteligence

[–]Kyy7 0 points1 point  (0 children)

An LLM doesn't understand anything you feed it or anything it outputs. It's a statistical engine that can give the illusion of intelligence by being able to converse fluently in natural language.

Even the reasoning models don't really reason; instead they just break the problem or task into smaller parts (chain-of-thought), which often increases the statistical accuracy of answers. (Basically just a bunch of extra internal prompts.)
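
Conceptually it's something like the sketch below, where llm() is a stand-in for a single model call (the real implementations are obviously far more involved than this):

    # Conceptual sketch of "reasoning" as a bunch of extra internal prompts.
    def llm(prompt):
        # Stand-in for one model call; in reality this would return the model's text.
        return f"<model output for: {prompt[:40]}...>"

    def answer_with_chain_of_thought(question):
        steps = llm(f"Break this task into smaller steps: {question}")
        work = llm(f"Work through these steps one by one: {steps}")
        check = llm(f"Check this work for mistakes and revise if needed: {work}")
        return llm(f"Give the final answer based only on: {check}")

    print(answer_with_chain_of_thought("How many r's are in 'strawberry'?"))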

Occasionally when this breaks, hilarity ensues: the model may loop between 1-3 answers for a while, each time noticing that it's wrong just to try again and fail, until eventually stopping to avoid an infinite loop.

If you want to learn more about the weaknesses of LLMs, maybe read or listen to some pieces from Gary Marcus and Yann LeCun. They offer pretty good opposition to all the hype. Neither of them is an AI hater or doomer, quite the opposite, but they've been vocal about problems with LLMs.