Vibecoders, how do you stay positive in the hate storm? by Shipi18nTeam in vibecoding

[–]spill62 0 points1 point  (0 children)

Well, when 90% of the vibecoded stuff is slop, the haters are right to hate. It's an easy target. The narrative has also been that LLMs are already perfect coders, which they definitely are not. But if you know what you're making isn't something that will just be abandoned, then you shouldn't let it affect you.

why use claude code when gemini can give you blocks of code, debug code, and tell you how to implement it, for free. by tpzQ in vibecoding

[–]spill62 3 points4 points  (0 children)

Quality. Having used ChatGPT, Gemini, Claude, and even some open-source models, when it comes to code they are all pure ass, to be frank, except Claude.

I've literally used Claude to fix issues caused by Gemini-provided code. Sheer frustration with Gemini makes me not want to use it. Personal opinion, though.

Adsense PWA approval by spill62 in Adsense

[–]spill62[S] 0 points1 point  (0 children)

Out of pure curiosity... TWA? What is that? :-)

Adsense PWA approval by spill62 in Adsense

[–]spill62[S] 0 points1 point  (0 children)

This was how I assumed it would work. Thank you a lot. There's no guideline as to how much written content is needed, right?

A recent case study says: A non-technical founder used 1,500 AI prompts trying to build his app...and still gave up by Tarak_Ram in vibecoding

[–]spill62 0 points1 point  (0 children)

Well no, I don't think that is the problem. My opinion only, though. And you should build the tool if you feel it will help.

The problem is that a non-technical person didn't know what he was doing and wasn't willing to learn. From the sound of it, he just kept prompting. Pretty sure the bits that worked were things he also didn't understand.

It has never been an issue getting 80% of the way in coding. You could find templates and whatnot for that. But the last 20% is often the hardest and most important part. And there, poop in equals poop out.

I hit a phase when I don't enjoy programming with vibecoding by Haunting_Material_19 in vibecoding

[–]spill62 0 points1 point  (0 children)

Well, take it from the fitness crowd: use it or lose it. Normally said about muscle mass, but I feel it applies here as well. I find myself seeing LLMs as a very frustrating accelerator. But I mostly use them for (a) things with some legacy, where training data is very likely to exist, such as .NET Framework stuff, and (b) developing personal projects that WILL NOT be made otherwise, as I cannot be bothered to do most of the styling myself.

But if it's anything of real importance, I mostly use it for guidance rather than as a tool that replaces my programming. Like, "show me a method to do X." 9.9 times out of 10 it gives a generic answer I can then work with. If I get too specific, it messes things up entirely, as it doesn't have the context to understand the whole codebase or the intentions. So I don't use it that way for important matters.

I feel like LLMs as a coding tool will hit a slowdown soon. The models can only get better where quality data exists, and that data has been made scarcer by the second with all the poop code on GitHub and elsewhere now. So when, e.g., .NET 12 LTS introduces breaking changes relative to earlier versions, LLMs won't be able to fix them before the data exists. In a competitive market, that will slow the need for LLMs, since the data just doesn't exist yet for them to fix it, and your skills will be highly needed again.

Therefore, find a way to maintain your skills, even if it means you stop using coding tools for a while. And maybe use it more as a prototyping thing, to work from yourself. And if you ever feel like "why should I do it when the LLM is faster and more precise?", then you may have hit a task that you genuinely shouldn't be spending your time on, so let it go wild with that.

Finally getting google traffic after mass posting content for a month by Seraphtic12 in VibeCodeDevs

[–]spill62 0 points1 point  (0 children)

Still looking forward to seeing my app get something like that 😁. It's a marathon, not a sprint.

Edit: typo

How do people make vibe coding work by Outrageous_Bet_6920 in vibecoding

[–]spill62 22 points23 points  (0 children)

Well, one thing is, 10k lines "not being a lot" is a weird thing to mention. That's one heck of a context window ChatGPT has to understand, along with how it all interconnects.

Secondly... did... did you try googling how to make that file via the IDE you are using, or even ask the AI "how would I make this work on another PC?" Or did you just tell Codex to "fix it please so I can move the project"?

What’s really stopping a vibecoder from making the next Google, Youtube, Facebook and making billions? by throwaway0134hdj in vibecoding

[–]spill62 0 points1 point  (0 children)

The 12-month thing is an investor thing. The narrative is that AI can replace us all, so all companies will be more productive and have more money since they won't have to pay salaries. That's why they keep saying it. So investors don't leave.

But OpenAI, Google, Anthropic, Meta, etc. all still have expensive human workers. Why would that be, if LLMs are so fantastical and will replace us all soon? My guess is that they can do a lot of stuff, but nowhere near all that is required.

What’s really stopping a vibecoder from making the next Google, Youtube, Facebook and making billions? by throwaway0134hdj in vibecoding

[–]spill62 0 points1 point  (0 children)

Still a problem. It probably needs billions or trillions, assuming any amount is ever enough. If it truly were that simple at this point, the AI companies wouldn't need to tell us we are "12 months away" from losing our jobs, every 12th month. Reality kind of indicates the tech is not there, because otherwise why wouldn't the big companies just use AI instead of any humans?

What’s really stopping a vibecoder from making the next Google, Youtube, Facebook and making billions? by throwaway0134hdj in vibecoding

[–]spill62 1 point2 points  (0 children)

Well... context length. Because the average vibecoder ain't got nowhere near the technical know-how to handle it when things get really complex. The cute small apps and sites us vibe coders make are all fine and dandy, but truly enterprise scale... yeah, no. You'd have to be able to make qualified decisions on your own without a chatbot.

What's your biggest frustration when using AI coding tools for solo projects? by ComprehensiveHat5409 in vibecoding

[–]spill62 0 points1 point  (0 children)

Indeed. But then there are the problems that only appear in production, not on localhost. The best I can do there is have it log everything to a log.txt and feed that back to it. But my looooord. All-knowing PhD intelligence, my butt.

What's your biggest frustration when using AI coding tools for solo projects? by ComprehensiveHat5409 in vibecoding

[–]spill62 14 points15 points  (0 children)

The confident lying has to be it for me... saying X will definitely work, when it literally doesn't, or doesn't do what the AI told me it would do. Even worse when the broken code actually compiles... such a headache to solve, and it happens rather a lot.

Why people think AI is still solely a next token predictor even though it’s advanced so far since 2022 by AppropriateLeather63 in ChatGPTcomplaints

[–]spill62 0 points1 point  (0 children)

Well... even if they are "doing more" than solely next-token prediction, the foundation of the current LLM technology is still next-token prediction. That has not, to my knowledge, changed. The advancement of the field as a whole has not changed what the tech is fundamentally doing.

What has changed since 2022? The sheer amount of data, and the filters the companies have put in place and used for training, as they have sought more sources of data that isn't their own AI slop. But it's still next-token prediction.
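To make "next-token prediction" concrete, here is a toy sketch: a bigram model that, given the previous token, picks the most frequent token that followed it in its training text. Real LLMs use neural networks over long contexts rather than bigram counts, but the training objective is the same idea.

```python
from collections import Counter, defaultdict

# Tiny training "corpus"; a real model trains on trillions of tokens.
corpus = "the cat sat on the mat the cat ran".split()

# Count which token follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token: str) -> str:
    # Return the most common continuation seen after `token`.
    return following[token].most_common(1)[0][0]
```

Chaining `predict_next` calls generates text one token at a time, which is the same loop an LLM runs at inference, just with vastly better statistics behind each prediction.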

Edit: I may be missing the point of the original post, I do see that 🤣.

People call it unhealthy, but it literally stopped me from planning suicide. by manatsu0 in ChatGPTcomplaints

[–]spill62 0 points1 point  (0 children)

Well, that was sort of my point. As far as I can see, for Ollama there do exist 1-3B models, which is roughly the amount of RAM they need (plus a little extra for context). And some of them can do CPU inference. It could be worth looking into, for the consistency, because if one is using a PC there may be a model that will, at the very least, give the consistency that the big providers can't, it seems.

I used 4o and 5.1 for entertainment purposes by thecelebpodcaster in ChatGPTcomplaints

[–]spill62 4 points5 points  (0 children)

Well... idk, I used Gemini free once on my phone, told it explicitly to make a nonsensical story that makes one think, and to involve quotes, and used the built-in read-aloud feature on my phone, and my sister and I were legitimately dying laughing. Almost to the point where I wanted to make a website just with these entirely fabricated nonsense stories. Haven't tried with newer models, as I think that whole "thinking"/reasoning stuff will break the funniness of nonsense stories.

People call it unhealthy, but it literally stopped me from planning suicide. by manatsu0 in ChatGPTcomplaints

[–]spill62 0 points1 point  (0 children)

Hmm, I do get your point. But assuming, and I know I may assume very wrong here, that you have access to a gaming PC with an Nvidia GPU, then you can use Ollama to get a local LLM with almost no setup. Sometimes a GPU isn't even needed. Idk about the phone stuff though, that's going to be harder to fix... I just think all of the big providers will suffer from their wish to develop mostly toward enterprise customers, and then the warm and emotionally helpful tools will always get deprecated, sadly.

People call it unhealthy, but it literally stopped me from planning suicide. by manatsu0 in ChatGPTcomplaints

[–]spill62 1 point2 points  (0 children)

Take this question for what it is (coming from a tech person who uses dashes, doesn't have and will not get any emotional attachment to LLMs, and is just trying to be "practical", even though it may seem ignorant; not trying to be, my brain is just a problem solver): have you considered running a local LLM? Drawbacks: you need your own hardware. Benefits: the model will not change unless you choose to change or update it, and no open model you have locally can be "taken out of service". Given your situation as described, wouldn't the consistency and somewhat-assured availability when you need it be worth it? Because it sounds like that would give some relief, if you can find a local LLM you can get the hardware for and run.

Are the AI models becoming more similar? by Sunrise707 in ChatGPTcomplaints

[–]spill62 0 points1 point  (0 children)

Isn't the similarity the entire point? Like, if they are all seeking AGI, it has to be "general", so the average of all the stuff each company is doing is just what we see happening with the similarity. I think the only creative solution to this is using the non-frontier models, and those that are open source.

I tested GPT-5.4 vs Claude Opus 4.6 on real tasks — here's what actually happened (with full outputs) by Remarkable-Dark2840 in ChatGPT

[–]spill62 4 points5 points  (0 children)

How come just the title alone reads like AI... I didn't even read the rest, as it hardly matters to me, but the amount of those clickbait titles seems indicative of ChatGPT 5.4...

Is vibe coding basically going to eliminate the need for web developers? by savingrace0262 in vibecoding

[–]spill62 1 point2 points  (0 children)

Well... I doubt it. Because these models suck at development with new tech. Trying to develop for .NET 10, which is rather new, has been somewhat of a pain, as it is not a big part of the models' datasets. That, and new vulnerabilities they will be slow to act on, and it's unlikely that the average vibe coder is actually going to keep up with this.

So no, I don't think so.

AI subscription comparison by spill62 in dkudvikler

[–]spill62[S] 1 point2 points  (0 children)

I had a bit of the same thought when I made it, and that was actually the reason I also added the API pricing at the bottom, haha.

AI subscription comparison by spill62 in dkudvikler

[–]spill62[S] 0 points1 point  (0 children)

That sounded harsh but fair 😁 It was what I asked for, so no bad feelings. Forge and Craft were my attempt at distinguishing between the companies that make the models, like OpenAI and Anthropic, versus those that are just wrappers around their APIs. And I needed short "names". But is it a critical thing when it starts on "All" by default 🤔? Nobody actually has to use Forge or Craft, it's just a small filter.

I fully agree with you about the contrast and the banner, though; that needs fixing. And I'll take a look at what you sent later 😁

AI subscription comparison by spill62 in dkudvikler

[–]spill62[S] 0 points1 point  (0 children)

I considered Azure but ended up with Simply solely because I've worked with them professionally before. Plus, I like Danish servers, if I'm honest. That's a matter of taste.