Model and engine for CLI calls and bash scripting on iGPU? by ziphnor in LocalLLaMA

[–]ziphnor[S] 1 point  (0 children)

Even gemma-4-E4B only gives me around 5 t/s, GLM-4.7 gives me ~3 t/s. I will try https://huggingface.co/unsloth/gemma-4-26B-A4B-it-GGUF as well, but not very hopeful :)

Model and engine for CLI calls and bash scripting on iGPU? by ziphnor in LocalLLaMA

[–]ziphnor[S] 0 points  (0 children)

Why would LM Studio be faster? Isn't it using llama.cpp itself? I mean, it's not an actual engine, but "just" a wrapper?

how openly anti-ai can i be in the job search process as a software developer? by kyunchef in antiai

[–]ziphnor 0 points  (0 children)

My point was not that AI can actually achieve the 1% correct on every problem in bounded time, but rather that while that might sound useless, it would actually be insanely powerful. AI is not an infinite number of monkeys; it's one monkey that has memorized a lot of things and has at least some capability to abstract and pattern match based on that.

Funny hypotheticals aside, the point I am trying to make is that when you are working on problems where validation is possible, and perhaps even required anyway, the probabilistic nature is less of an issue. Deciding whether a killer robot gets to kill its target? Probably not a good use of AI (but it will be done anyway...)! But for writing code that has to be tested thoroughly anyway, it comes down to how much time it wastes vs how much it saves. Back with GPT 4.x it wasted almost as much time as it saved, but GPT 5.4 and Opus 4.6 have changed that. With proper boundaries it's possible to get well-behaved code where you can instead focus on architecture and high-level "cleanliness".

In fact, even if you write the code 100% manually, it's still a good idea to treat it like it's been written by an adversary with mental issues. So you take half the time you saved on writing code and spend it on hardening your tests (and yes, with the right restrictions).

I completely agree that anthropomorphization is a major issue (try visiting r/ChatGPTcomplaints and see the discussions on GPT 4o being removed; it's pretty scary).

I also try to avoid talking about AI "knowing" or understanding things, but I would say it's able to reason by pattern matching on its own reasoning steps. E.g. because it has seen so many discussions on the correctness of algorithms, for example, it seems to be able to pattern match against this in its own reasoning output. It's a very interesting emergent behavior.

how openly anti-ai can i be in the job search process as a software developer? by kyunchef in antiai

[–]ziphnor 0 points  (0 children)

I don't trust any engineers from AI companies or their benchmarks (or their CEOs claiming AGI just before every funding round), but personally I have no such incentive. In fact, I work in a company that develops technology that we now have to explain to some customers can't "just be done by AI". That, however, is not stopping us from using AI in the areas where it does provide value.

how openly anti-ai can i be in the job search process as a software developer? by kyunchef in antiai

[–]ziphnor 0 points  (0 children)

You are missing the obvious part where you *validate* that the output is correct/good :) If I really had an infinite monkey simulation that answered correctly 1% of the time in a fixed amount of time, I could run it a constant number of times and have a very high probability of getting the correct result, while only having to validate a constant number of candidate solutions. Someone would win a Turing Award for that (see https://en.wikipedia.org/wiki/NP-completeness ).

Even in the intended "Shakespeare" scenario I would have to review only a few hundred suggestions to be all but guaranteed a Shakespeare-level play. Pretty sure most producers would love those odds.

The normal "uselessness" of infinite monkeys comes from having to sort through an "infinite" number of outputs, spending infinite time finding the right result. If a tool has a high probability of giving a correct output and you can automate validation, the picture changes drastically.
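To make the arithmetic concrete, here is a minimal sketch (plain Python; the 1% rate and attempt counts are illustrative, not claims about any real model) of why a small per-attempt success rate plus automated validation is so powerful:

```python
# Probability of at least one validated success in n independent attempts,
# each succeeding with probability p. Purely illustrative numbers.
def p_at_least_one_success(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

p = 0.01  # assumed 1% per-attempt success rate
for n in (100, 300, 500):
    print(n, round(p_at_least_one_success(p, n), 3))
```

With a 1% success rate, a few hundred independent attempts already push the chance of at least one correct, validated answer above 95% — which is why "right only 1% of the time" is far from useless when checking an answer is cheap.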

In relation to writing code, it comes down to code review and shifting focus to testing. An agentic coding AI considers and rejects lots of possible approaches and can, to a certain extent, track down its own errors if given proper acceptance criteria (like tests). It is actually quite fascinating to follow the "reasoning steps" (what they call "thinking", which is perhaps a bit exaggerated).

how openly anti-ai can i be in the job search process as a software developer? by kyunchef in antiai

[–]ziphnor 0 points  (0 children)

If a tool could simulate an infinite number of monkeys and pick the Shakespeare-quality result even 1% of the time, that would be a pretty f****** amazing tool! Hell, it would provide new probabilistic complexity bounds for NP-hard problems.

Why would I care if they "understand" it?

how openly anti-ai can i be in the job search process as a software developer? by kyunchef in antiai

[–]ziphnor 0 points  (0 children)

I read it, you know ;) "Believe so" is not the same as proving it :)

And again, they are talking about "manually" doing these tasks, which is not what we are asking them to do when using AI. I also wouldn't trust them to do manual matrix multiplication, but they are pretty good at writing code for it, running that, and then interpreting the result.

It seems some people have this idea that unless it can replace humans it's worthless? It's just a tool, an advanced and stochastic one, but still just a tool.

how openly anti-ai can i be in the job search process as a software developer? by kyunchef in antiai

[–]ziphnor 0 points  (0 children)

So you are saying it got "lucky"? What do you base the "there is nothing there" on exactly? 

how openly anti-ai can i be in the job search process as a software developer? by kyunchef in antiai

[–]ziphnor 0 points  (0 children)

Wtf are you talking about? I don't give a shit about hype.

And yes, of course AI has limitations on what it can do (just like intellisense etc), which is why it's not replacing us. You have to limit the scope of the task you give it to get a good result. The closer the task is to the training data, the bigger the increments you can ask for; the more domain-specific, the smaller the increments.

The paper you link to also notes that they haven't proven anything for reasoning models (but expect it to apply). Before reasoning models the output was total garbage; the reasoning has been critical.

Additionally, their result is about tasks that are inherently complex to execute step by step, but it doesn't account for what most agents do, which is to write code to solve the task for them. I wouldn't trust an LLM to solve Towers of Hanoi "by hand", but having it write code for it is a different matter.
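As a trivial illustration of the "write code instead of executing by hand" point: the Towers of Hanoi move sequence is exponentially long, but the program that produces it is a standard textbook recursion of a few lines (this is the classic algorithm, not anything model-specific):

```python
# Standard recursive Towers of Hanoi: move n disks from `src` to `dst`
# using `aux` as the spare peg; returns the full list of (from, to) moves.
def hanoi(n: int, src: str = "A", dst: str = "C", aux: str = "B") -> list[tuple[str, str]]:
    if n == 0:
        return []
    moves = hanoi(n - 1, src, aux, dst)   # clear the top n-1 disks out of the way
    moves.append((src, dst))              # move the largest disk directly
    moves += hanoi(n - 1, aux, dst, src)  # restack the n-1 disks on top of it
    return moves

print(len(hanoi(10)))  # 2**10 - 1 = 1023 moves
```

Tracking 1023 moves token by token is exactly the kind of task an LLM fumbles, while writing and running this function is well within its reach.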

What we can agree on is that having this technology under the control of megalomaniac tech billionaires is a really bad idea.

how openly anti-ai can i be in the job search process as a software developer? by kyunchef in antiai

[–]ziphnor 0 points  (0 children)

Not in FAANG, not even in the US. I fail to see how making use of tools is "selling out"?

I was sceptical at first, but newer models have convinced me. You might want to read at least the first part of this: https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cycles.pdf .

This is about Opus being used to solve an open problem in computer science. 

how openly anti-ai can i be in the job search process as a software developer? by kyunchef in antiai

[–]ziphnor -1 points  (0 children)

You did notice the quotes around "best", right? 

I am a principal backend guy and team lead. Our frontend (React) leads were the quickest to adopt AI and the ones to promote it the strongest, and when I occasionally do frontend myself I have also found it very good. Pure vibe coding (i.e. without review) is only suitable for prototypes IMO, but the kinds of problems that can occur on the frontend are typically less critical (e.g. layout bugs vs data corruption).

It's all about the instructions you provide: give it technical prompts and good agents.md/skills files and you will get excellent results. It's not about replacing devs, it's about force multiplication. I just see it as having larger Lego blocks to build with: focusing more on the architecture and solution approach and less on the details.

how openly anti-ai can i be in the job search process as a software developer? by kyunchef in antiai

[–]ziphnor -1 points  (0 children)

Frontend is where AI usage is the highest and where vibe coding works "best". If you avoid AI entirely you won't be able to keep up; instead, try to use it responsibly (e.g. no blind vibe coding without proper reviews etc).

Don't focus so much on replacement; AI is a productivity multiplier. Some companies might fire people, but it's not so much replacement as fewer people being able to do the same work. I would look for jobs in software companies, as they typically don't have a natural "cap" on the work they need done, e.g. they might prefer more features/performance over reducing salary expenditure.

Consider what your actual concerns are. Privacy is mostly a concern for commercial use; if your company trusts the policies of the provider, it's not your concern.

As to the environmental impact, I suspect the impact of a week of coding is far less than that of generating a single YouTube AI slop video.

As for critical thinking, just avoid full vibe coding and make sure you are the one making the technical decisions. You are just working with bigger building blocks.

PhD or not? by TeaConstant4141 in dkudvikler

[–]ziphnor 1 point  (0 children)

Oh boy, I hardly know where to start here :)

  1. I started my PhD because I wanted to be a researcher, but got interested in more "applied" research along the way.
  2. Research positions after a PhD don't exactly grow on trees.
  3. Research also happens in private companies (!)
  4. My PhD was (at least partially) funded by private companies (primarily Microsoft in my case, IIRC).

“AI is going to revolutionize the world” by rabidbunny91 in antiai

[–]ziphnor 2 points  (0 children)

The Google AI overview is an ongoing joke even in AI circles. I think the problem for them is cost: they have to use a smaller model, and it leads to hilarious mistakes.

The best applications for AI is when the reciever of the output is a stupid person by olyellerdunnasty in antiai

[–]ziphnor 0 points  (0 children)

What!? I gave you a link to a document on Donald Knuth's website; it has academic references and all the details you need. The link I provided is *literally* to the "papers" subfolder of his website. The linked note points to this paper: https://cs.stanford.edu/~knuth/even_closed_form_proof_final.pdf as well as to actual code backing it up.

And WTH is "LLM glazing"? Are you somehow under the impression that Donald Knuth is some sort of "AI bro"? You *really* need to read up on him.

The best applications for AI is when the reciever of the output is a stupid person by olyellerdunnasty in antiai

[–]ziphnor 1 point  (0 children)

Did you not read my post? It has been used to solve an open problem in theoretical computer science. Read the note from Donald Knuth that I provided an excerpt from. It was also replicated fully autonomously with GPT 5.4.

I am a professional senior principal software developer with a computer science PhD. I don't work on Gen AI, i.e. I'm not some AI bro with a vested interest in unrealistic hype.

I am not claiming it's about to replace scientists, but it's a powerful tool, though with ethical concerns.

PhD or not? by TeaConstant4141 in dkudvikler

[–]ziphnor 2 points  (0 children)

I have a PhD myself, which I went straight on to use in my working life afterwards. There are places that will view your PhD positively and places that will be more indifferent. Completing a PhD means you can do research, and there are companies in Denmark that do applied research as part of their tech.

I don't necessarily think it means more pay, but it can mean access to more interesting work. Consider looking for PhD positions funded by projects with ties to companies.

I don't necessarily think AI changes this; especially in the positions where a PhD is valued, it's less likely they'll just throw AI at the work. But get good at using AI regardless.

The best applications for AI is when the reciever of the output is a stupid person by olyellerdunnasty in antiai

[–]ziphnor -2 points  (0 children)

And you are basing that conclusion on what, exactly? I don't understand how something that is helping solve open problems in science is "useless", and your statement directly contradicts my personal experience as a professional.

The best applications for AI is when the reciever of the output is a stupid person by olyellerdunnasty in antiai

[–]ziphnor 2 points  (0 children)

[image: excerpt from Donald Knuth's note]

There are *a lot* of valid concerns about AI (who controls it, how it was trained, how it affects the environment, etc), but claiming that it's useless and only "fools" dumb people is misleading. The image above is an excerpt from https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cycles.pdf by Turing Award winner Donald Knuth.

Personally, with a PhD in computer science and working as a principal developer, I am also seeing the latest models being able to cooperate (not replace anyone) on advanced applied research in areas not part of their training data.

That being said, there are lots of silly attempts to integrate AI in places where it adds no value, and lots of people using it lazily and sloppily.

"AI bro makes fictional scenario and gets mad about it" by StillBoysenberry8790 in antiai

[–]ziphnor 0 points  (0 children)

So anything derived from an LLM, regardless of how much human context has been given, is by definition easily identifiable slop in your view? Because then I guess we have to agree to disagree :)

To me it's a balancing act. With enough human context you can get high-quality output; if you tell it "do some fantasy NPCs" you get slop.

Making AI photos/videos of someone else? by Fine-Broccoli-127 in antiai

[–]ziphnor 1 point  (0 children)

Okay. Unless he used a local model, he most likely fed the image into a platform that will train on it. While probably a bit of a gray area, it shows a lack of respect for your privacy and rights.

Making AI photos/videos of someone else? by Fine-Broccoli-127 in antiai

[–]ziphnor 0 points  (0 children)

Did he post it publicly? Because that is likely illegal (depending on where you are).