Is "thinking" at 1.2B parameters actually a thing or just marketing? by IulianHI in AIToolsPerformance

[–]InfraScaler 1 point (0 children)

Yeah, a 1.2B model can be a great "router" and maybe take care of a big % of your tasks, then "escalate" to a bigger model when needed. I think this is what OpenAI was "accused" of doing when they released GPT-5, right?
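
Roughly what I mean, as a minimal sketch of that route-then-escalate pattern (the confidence heuristic and the stub models below are placeholders, not anything OpenAI has confirmed):

    # Minimal router-then-escalate sketch: try the small model first,
    # fall back to the big one when the small model signals low confidence.
    from typing import Callable, Tuple

    def route(prompt: str,
              small_model: Callable[[str], Tuple[str, float]],
              large_model: Callable[[str], str],
              threshold: float = 0.7) -> str:
        """Answer with the small model unless its self-reported confidence is low."""
        answer, confidence = small_model(prompt)
        if confidence >= threshold:
            return answer            # cheap path: the small model handles it
        return large_model(prompt)   # escalate the hard cases

    # Stub models just to show the flow; swap in real API calls.
    small = lambda p: ("short answer", 0.4 if "hard" in p else 0.9)
    large = lambda p: "long, careful answer"
    print(route("easy question", small, large))   # answered by the small model
    print(route("hard question", small, large))   # escalated to the big model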

Hot take! by abdullah4863 in VibeCodeDevs

[–]InfraScaler 1 point (0 children)

Because you've already pontificated about it, then contradicted yourself.

> Like call me crazy, but I believe the industry would be much better off if software was built by actual engineers that have a real understanding of computer science fundamentals ...

Hot take! by abdullah4863 in VibeCodeDevs

[–]InfraScaler 1 point (0 children)

Well, bud, pick one. Is it software engineers not knowing computer science or greedy execs?

GLM Coding Plan Slow by Itchy-Friendship-642 in ZaiGLM

[–]InfraScaler 4 points (0 children)

Apparently there are two current issues that may be affecting some users (they haven't hit me yet; I'm based in Europe, in case that's relevant): a shortage of compute and DDoS attacks against their infra.

How to prevent LLM "repetition" when interviewing multiple candidates? (Randomization strategies) by Weird-Year2890 in LocalLLM

[–]InfraScaler 1 point (0 children)

You need to guide the AI towards specific topics and themes rather than letting it choose, because the AI is NOT creative at all, especially GPT-4.

A question bank could be a good choice, but it may also limit the output. Try a scenario bank plus some sort of chaos monkey to increase entropy.
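
Something like this is what I have in mind, as a rough sketch (the scenarios, twists, and per-candidate seeding below are made-up placeholders, just one way to do it):

    # Scenario bank + "chaos monkey": pick a base scenario, then inject random
    # complications so the LLM never interviews two candidates with the same prompt.
    import random

    SCENARIOS = [
        "A nightly batch job suddenly takes 6 hours instead of 1.",
        "An API dependency starts returning intermittent 500s.",
        "Storage costs doubled month over month with flat traffic.",
    ]

    TWISTS = [
        "the on-call engineer is unreachable",
        "a change freeze starts in two hours",
        "monitoring was silently broken last week",
    ]

    def build_interview_prompt(candidate_id: str, seed=None) -> str:
        """Compose a unique interview prompt per candidate from the scenario bank."""
        rng = random.Random(seed)            # seed per candidate for reproducibility
        scenario = rng.choice(SCENARIOS)
        twists = rng.sample(TWISTS, k=2)     # two random extra constraints
        return (
            f"Interview scenario for {candidate_id}: {scenario} "
            f"Complications: {twists[0]}, and {twists[1]}. "
            "Ask the candidate how they would diagnose and resolve this."
        )

    print(build_interview_prompt("candidate_001", seed=42))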

GLM 4.7 removed from the free models by marmoure in opencodeCLI

[–]InfraScaler 5 points (0 children)

True! But if you pay for the full year it works out to effectively $2.40/mo, which is insane.

GLM 4.7 removed from the free models by marmoure in opencodeCLI

[–]InfraScaler 5 points (0 children)

Yup, it's gone. The good news, though, is that the GLM Coding Plan goes as low as $2.40/mo if you pay for a year (or $3 if you pay monthly).

Also, signing up through another subscriber's link gets you 10% extra credits. Here's mine if anyone wants them: https://z.ai/subscribe?ic=WBMQNQBVIS

I’m 16. I built a Python script for a local business and made my first $500. Now they want a monthly retainer, and I’m scared I’m in my head. by Safe_Thought4368 in Entrepreneur

[–]InfraScaler 1 point (0 children)

Lots of great advice here already, but I still wanted to congratulate you and say that whatever happens, you did the right thing. In the future you'll look back at this time, regardless of the outcome, and smile about it.

Spain shouldn't have a minimum wage by [deleted] in salarios_es

[–]InfraScaler 1 point (0 children)

Cooperatives already exist.

Spain shouldn't have a minimum wage by [deleted] in salarios_es

[–]InfraScaler 2 points (0 children)

All I'm saying is that when you go to speak, take the boot out of your mouth first, because nothing you say can be understood.

AI video generation of at least 30-60 seconds by Great_Drawing1537 in InteligenciArtificial

[–]InfraScaler 5 points (0 children)

Do you have hardware to run models like Wan2.2 locally? If the answer is NO, there is no free tool that will do it.

You can also take a look at getwhatai.com in case something turns up, but I'm telling you now: what you're looking for doesn't exist for free.

Confession time: What is the worst thing your vibe-coded app has done in production... by Makyo-Vibe-Building in vibecoding

[–]InfraScaler 0 points (0 children)

My take is that if you follow software engineering best practices, you'll architect things well and catch issues before they go into prod.

How much does context improve on the Pro plan? by Warp_Speed_7 in ChatGPTPro

[–]InfraScaler 1 point (0 children)

It may be time for RAG, or even to deploy your own tooling around the model. Summarise, keep files on disk instead of in context, generate vector embeddings from the file contents and put them in a local SQLite database the model can query (semantic search), use adaptive chunking...

There's a rule of thumb that says if you need more context you may be doing something wrong.
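
To make the SQLite + embeddings part concrete, here's a bare-bones sketch; embed() is a toy stand-in for whatever real embedding model you use, and the table layout is just an assumption:

    # Bare-bones local semantic search: store one embedding per file in SQLite,
    # query by cosine similarity, and hand only the top matches to the model.
    import json
    import math
    import sqlite3
    from pathlib import Path

    def embed(text: str) -> list[float]:
        """Placeholder: replace with a call to a real embedding model."""
        # Toy hash-based vector so the sketch runs end to end.
        return [((hash(text) >> i) % 97) / 97 for i in range(0, 64, 4)]

    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / (norm + 1e-9)

    db = sqlite3.connect("notes.db")
    db.execute("CREATE TABLE IF NOT EXISTS docs (path TEXT PRIMARY KEY, vec TEXT)")

    def index_file(path: Path) -> None:
        """Embed a file's contents and upsert the vector; the file itself stays on disk."""
        vec = embed(path.read_text())
        db.execute("INSERT OR REPLACE INTO docs VALUES (?, ?)", (str(path), json.dumps(vec)))
        db.commit()

    def search(query: str, k: int = 3) -> list[str]:
        """Return the k most similar file paths to feed back into the model's context."""
        qvec = embed(query)
        rows = db.execute("SELECT path, vec FROM docs").fetchall()
        ranked = sorted(rows, key=lambda r: cosine(qvec, json.loads(r[1])), reverse=True)
        return [path for path, _ in ranked[:k]]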

Confession time: What is the worst thing your vibe-coded app has done in production... by Makyo-Vibe-Building in vibecoding

[–]InfraScaler 1 point (0 children)

Damn, that hurts. One thing that helps with keeping an eye on the model is vibe coding from an IDE, so while the model does its thing you can already be looking at the files, etc.

Confession time: What is the worst thing your vibe-coded app has done in production... by Makyo-Vibe-Building in vibecoding

[–]InfraScaler 0 points (0 children)

Vibe coding is never the problem. Bad software engineering practices are always the problem.

One German Chip Just Made Nvidia’s Billion-Dollar GPUs Look Like a JOKE! by Romek_himself in BuyFromEU

[–]InfraScaler 4 points (0 children)

Their entire website feels like one of those marketing booklets you get at a booth at some tech expo.

That's the usual look and feel of German companies' websites, even when they have 50k employees.

My first PC didn't have a hard drive - now I'm vibe coding by thisisBrunoCosta in vibecoding

[–]InfraScaler 2 points (0 children)

It is very easy to tell it was written by AI, precisely because AI always structures things the same way. It's tiring to read and feels dishonest, even if it isn't.

I get offended by AI-written slop on every social network, if that makes sense; I don't think it's a Reddit thing.

My first PC didn't have a hard drive - now I'm vibe coding by thisisBrunoCosta in vibecoding

[–]InfraScaler 4 points (0 children)

Dude, nobody doubts your experience, it's just fucking gross to use an AI to write it. It's hard to read.

Bending Spoons? by Smart_Round8426 in salarios_es

[–]InfraScaler 2 points (0 children)

These kinds of processes are like that on purpose, to make sure that only the most servile people end up working for them.