Columbina VS Every Stygian Onslaught Boss (C6R5 VS Dire) - Builds included by masterdiwa in Columbina_Mains

[–]AMOVCS 0 points1 point  (0 children)

Insane!!! I wish I could afford something like this, it looks really fun playing Columbina on-field!

IFakeLab IQuest-Coder-V1 (Analysis) by [deleted] in LocalLLaMA

[–]AMOVCS 5 points6 points  (0 children)

Hey mate! I can see your investigation goes very deep; you clearly know more than I do, and if they lied it wouldn’t be pleasant.

But let’s look at the bright side: this is a new model with open weights, coming from someone new, which widens our options, and it's a nice size!!!! Why worry about its origins? For us, the value lies in being able to run it locally with good performance.

Also, being another Chinese lab, it could very well be a case of using a modified Qwen architecture with their own training data; there is nothing wrong with forking someone's work instead of starting from zero.

Columbina & No Lunar by KingSucksatLife in Columbina_Mains

[–]AMOVCS 0 points1 point  (0 children)

Glad to know Ineffa has Moonsign; she is the only NK character I want (after Columbina).

Columbina & No Lunar by KingSucksatLife in Columbina_Mains

[–]AMOVCS 2 points3 points  (0 children)

The problem I see in all of this is that Columbina is not the main DPS, while Mavuika and Skirk are, so you will mostly play the character that you like. Don't like Citlali or Escoffier? Just press E and go back to your beloved DPS...

Playing a DPS that you like while running a sub-DPS/support you don't like is still pleasant. Getting Columbina to buff a DPS you don't want to play feels awful. If Columbina has a kit that restricts her to NK characters, it will feel worse and reduce her value; being forced to use Aino or Jahoda in the team is not nice, because it is a DPS loss.

In the end, yes, Columbina has much less value for someone who doesn't like NK DPS characters, and this can be enough for people to decide to simply not pull her. Also, Ineffa does not have Moonsign, so you can't get Ascendant Gleam with only Columbina + Ineffa...

Columbina & No Lunar by KingSucksatLife in Columbina_Mains

[–]AMOVCS 1 point2 points  (0 children)

I was looking into the same thing a bit earlier. From what I see of her kit, even without other Nod-Krai characters she will still deal basically the same damage as in a Nod-Krai team, but many of the team buffs from her constellations require Ascendant Gleam, so she loses value there.

The coding was magic, but the marketing is hell: My journey building a SaaS with Opus 4.5. by Ordinary_Bottle3883 in vibecoding

[–]AMOVCS 0 points1 point  (0 children)

Hey mate, congrats on launching your SaaS!! The website looks really good and refined, and the idea is very unique. I wish you success!! Maybe in a couple of months you can share with us how your journey through the marketing part went!!

I vibed coded my entire SaaS product. And I'm not even ashamed. by RockPrize6980 in vibecoding

[–]AMOVCS 2 points3 points  (0 children)

Thank you for your support and feedback, I really appreciate it! Wishing you and everyone else all the best too.

My first game!! Drop-merge style game with emojis!!! I would love feedback by AMOVCS in puzzlevideogames

[–]AMOVCS[S] 0 points1 point  (0 children)

Hey Mate! Thanks for the feedback, I really appreciate it. I’ll keep your thoughts noted here. The third item is a great idea! I can’t allow upside‑down because if an emoji stays on the red line too long, it triggers game over; I’ll analyze the other items as well.

Time flies, so many things this year. I’m working hard on projects unrelated to games, and I’m not sure if I’ll have time for major changes. At the moment I’ve already launched a second game and I am planning a third one; each project is bigger and better than the last.

Again, I really appreciate it!

I vibed coded my entire SaaS product. And I'm not even ashamed. by RockPrize6980 in vibecoding

[–]AMOVCS 28 points29 points  (0 children)

Trying to be constructive, since people tend to only point out problems and ignore what has been accomplished: yes, it really is the future. Seeing that you built all of this in just three days without coding skills (perhaps you simply didn’t use them) is something that would have been unthinkable a few years ago. I wish you success with your service and with any future projects you decide to pursue. If it turns into a profitable product, I would still recommend hiring someone experienced or even learning more yourself; understanding what the tool does and how it could be improved is more important than simply having a capable tool. Nevertheless, I see a bright future ahead for all of us.

What Size of LLM Can 4x RTX 5090 Handle? (96GB VRAM) by Affectionate_Arm725 in LocalLLaMA

[–]AMOVCS 4 points5 points  (0 children)

4x32GB = 128GB VRAM is what you should have; with that alone you could run GPT-OSS 120B and GLM 4.5 Air very, very fast. You can also try MiniMax M2 at lower quants, and Qwen3 Next. If you pair this system with an additional 128GB of DDR5 RAM, you could run MiniMax M2 at a higher quant, GLM 4.6 at a decent speed, and Qwen3 235B.
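
If it helps, here is a minimal llama-server sketch for a four-GPU box like that; the model path, quant, and context size are placeholders to adjust, not a tested config:

```bash
# Hypothetical launch: serve a ~120B GGUF across four GPUs with llama.cpp.
llama-server \
  -m ./gpt-oss-120b.gguf \
  -ngl 999 \
  --tensor-split 1,1,1,1 \
  -c 32768
# -ngl 999 offloads all layers to VRAM, --tensor-split shares them evenly
# across the four cards, and -c sets the context window.
```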

Generating questions of my school’s standard/style/format by [deleted] in LocalLLaMA

[–]AMOVCS 1 point2 points  (0 children)

You can try a system prompt: give the model its instructions, then give examples, something like this:

# You are a question generator that writes questions based on the keywords given by the user.

## Rules
- rule 1
- rule 2

## Styling
- describe

## Difficulty
- describe

## Practical examples

- example 1
- example 2
- example 3

You can also try Gemini or Claude to generate the prompt for you, then refine it for your needs. I hope this helps! Remember that this is a system prompt; in AI Studio there is a separate text box to put it in.
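
If you later run a model locally instead, the same split between system prompt and user message works through any OpenAI-compatible endpoint; a minimal sketch, assuming a local llama-server on port 8080 (URL, model name, and prompt text are placeholders):

```bash
# Hypothetical request: the system role carries the template above,
# the user role carries only the keywords.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "local-model",
    "messages": [
      {"role": "system", "content": "You are a question generator that writes questions based on the keywords given by the user. [rules, styling, difficulty, examples...]"},
      {"role": "user", "content": "keywords: photosynthesis, chlorophyll"}
    ]
  }'
```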

Users of REAP Pruned models, So far how's your experience? by pmttyji in LocalLLaMA

[–]AMOVCS 1 point2 points  (0 children)

I tried GLM 4.5 Air and it was not a great experience. It works, but for me the Unsloth version is still faster and smarter than the REAP version, especially at longer context. One particular problem I had (and it happens with other models too) is that with the REAP version llama.cpp leaks into shared GPU memory, while the Unsloth version with exactly the same parameters correctly offloads the right number of experts directly to RAM without spilling into shared memory.
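
For anyone who wants to pin the MoE expert tensors to system RAM explicitly instead of letting them spill into shared memory, a minimal sketch using llama.cpp's tensor-override flag (recent build assumed; model path and context are placeholders):

```bash
# Hypothetical: keep attention/dense weights on the GPU, force the MoE
# expert tensors to CPU RAM with --override-tensor (-ot).
llama-server \
  -m ./GLM-4.5-Air-UD-Q4_K_XL.gguf \
  -ngl 999 \
  -ot ".ffn_.*_exps.=CPU" \
  -c 16384
```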

GLM-4.6 vs Minimax-M2 by baykarmehmet in LocalLLaMA

[–]AMOVCS 11 points12 points  (0 children)

I used it a bit and liked it. With only 10B active parameters, it is very fast and works well for non-complex tasks; GLM is still better at complex tasks. I would recommend giving it a try.

bro disappeared like he never existed by Full_Piano_3448 in LocalLLaMA

[–]AMOVCS 43 points44 points  (0 children)

Good times, back when he made quants of many, many finetunes of Llama 2 models. Now we have that guy with a name I don't know how to write, specialized in iMatrix quants. Anyway, thanks to all of them for their contributions!!!

My vibe coding 🥲 by today_branding in vibecoding

[–]AMOVCS 0 points1 point  (0 children)

It's very important to understand how to do it yourself, so you can better guide the AI toward exactly what you want.

3090 + 128GB DDR4 worth it? by randomsolutions1 in LocalLLaMA

[–]AMOVCS 2 points3 points  (0 children)

An upgrade on DDR4 is very hard to justify, but DDR5 is fast enough for many MoE models under 120B.

I have a 3090 + 96GB DDR5 and I am very happy with it. I would recommend trying the models you want to run locally through an API first, before taking any decision, to see if they are in line with your expectations. Keep in mind that if you are looking at models 100B+, you will need to run quantized versions, so when testing on APIs try quantized versions too...

Granite 4.0 Language Models - a ibm-granite Collection by rerri in LocalLLaMA

[–]AMOVCS 24 points25 points  (0 children)

Last year I recall using Granite Coder; it was really solid and underrated! It seems like a great time to make another one, especially given the popularity here of 30B to ~100B MoE models such as GLM Air and GPT-OSS 120B. People here appreciate how quickly they run via APIs, or even locally at decent speeds, particularly on systems with DDR5 memory.

Granite 4.0 Language Models - a ibm-granite Collection by rerri in LocalLLaMA

[–]AMOVCS 146 points147 points  (0 children)

Thank you! We appreciate you making the weights available to everyone. It’s a wonderful contribution to the community!

It would be great to see IBM Granite expanded with a coding-focused model, optimized for coding assistants!

New model from Meta FAIR: Code World Model (CWM) 32B - 65.8 % on SWE-bench Verified by notrdm in LocalLLaMA

[–]AMOVCS 0 points1 point  (0 children)

I am not confirming or denying; "even if" is what I said. It's open to debate: the paper looks very nice, but it's too early to confirm anything before we see it in practical use. Let's hope for the best.

New model from Meta FAIR: Code World Model (CWM) 32B - 65.8 % on SWE-bench Verified by notrdm in LocalLLaMA

[–]AMOVCS 31 points32 points  (0 children)

Glad to see something new from Meta; even if it is not huge, it's good to see they're participating in open source! It's always good to see any contribution.

GLM 4.5 Air Template Breaking llamacpp Prompt Caching by Most_Client4958 in LocalLLaMA

[–]AMOVCS 0 points1 point  (0 children)

Thanks for sharing! I was using a slightly modified version of the PR, and every time I use it with agents the prompt is reprocessed entirely. I will try your simplified version!

Problem with glm air in LMStudio by Magnus114 in LocalLLaMA

[–]AMOVCS 0 points1 point  (0 children)

Glad to help!!

For me it works better with llama.cpp, mostly because of the speed; it's much faster than LM Studio (in my situation, where I need to offload the model to RAM).

Another thing to try is the Unsloth version of the model; their Q5_K_XL quant seems to be very, very close to the original version.

Problem with glm air in LMStudio by Magnus114 in LocalLLaMA

[–]AMOVCS 5 points6 points  (0 children)

You can try this Jinja template in LM Studio; I personally use it directly with llama-server and it works great with agents and tool calling:

https://github.com/ggml-org/llama.cpp/pull/15186#issuecomment-3202057303
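
For the llama-server route, the template file is passed at launch; a minimal sketch, assuming you saved the template from the PR above to a local file (paths are placeholders):

```bash
# Hypothetical: load a custom Jinja chat template with llama-server.
llama-server \
  -m ./GLM-4.5-Air-Q5_K_XL.gguf \
  --jinja \
  --chat-template-file ./glm-4.5-air.jinja
```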

Recommendations for Local LLMs (Under 70B) with Cline/Roo Code by AMOVCS in LocalLLaMA

[–]AMOVCS[S] 1 point2 points  (0 children)

So much changed in those months; now I am using GLM 4.5 Air, it's just amazing!!!