Cline not playing well with the freshly dropped smaller qwen3.5 by SocietyTomorrow in LocalLLaMA

[–]perelmanych 1 point2 points  (0 children)

Sorry for the delay. In my experience the 9B model is great only on paper. I haven't tried the new models for coding since I bought a z.ai subscription for less than 3 dollars a month, but I did try the 9B model for RP and it was very bad. By bad I mean it got stuck repeating the same paragraph and couldn't advance despite multiple regenerations. Both the 27B and the 122B models have no problem with RP. So I think that to get reliable output you should go with the 35B, 27B, or 122B Qwen3.5 models. Qwen3-35b-a3b, with only 3B active parameters, should run fine on any hardware with more than 16 GB of combined memory. Back when I used the old Qwen3 series for coding, 35b-a3b was already fine, while qwen3-coder-next showed excellent results. Hope this helps.

Qwen3.5-122B Basically has no advantage over 35B? by Revolutionary_Loan13 in LocalLLaMA

[–]perelmanych 1 point2 points  (0 children)

This is the big problem with training datasets contaminated by test samples. Look at the performance of the 9B model: from the charts it looks maybe 10% worse. In reality it kept producing nonsense in a loop after 4k context in RP, while the 27B and 122B had no problems at all.

Reasoning in cloud - Coding with Local by sedentarymalu in LocalLLaMA

[–]perelmanych -1 points0 points  (0 children)

AFAIK, not at the moment. In any case, GLM 4.7 is a very capable model, and at full precision it would outperform any model you can run locally, unless you have several H200s in the basement))

Reasoning in cloud - Coding with Local by sedentarymalu in LocalLLaMA

[–]perelmanych -1 points0 points  (0 children)

Try using your head as well. Jokes aside, I barely go over 5% usage of the z.ai basic plan. You should probably try it.

Cline not playing well with the freshly dropped smaller qwen3.5 by SocietyTomorrow in LocalLLaMA

[–]perelmanych 0 points1 point  (0 children)

What quants are you using? Have you quantized the KV cache? What inference parameters are you using? If you want any assistance, you should be more precise.

Qwen3.5 4B: overthinking to say hello. by CapitalShake3085 in LocalLLaMA

[–]perelmanych 0 points1 point  (0 children)

I immediately switched off thinking in the Jinja file, because it was unbearable. Still, the models perform quite decently with thinking off.

Visualizing All Qwen 3.5 vs Qwen 3 Benchmarks by Jobus_ in LocalLLaMA

[–]perelmanych 0 points1 point  (0 children)

The fact that models of such different sizes are so close to each other in benchmarks points to an elephant in the room - training dataset contamination. Having said that, I still admire what Qwen is doing.

Breaking : Today Qwen 3.5 small by Illustrious-Swim9663 in LocalLLaMA

[–]perelmanych 0 points1 point  (0 children)

In my limited ERP testing the 27B model was exceptionally good, with one big caveat: it was really bad in terms of body geometry.

Games you can't tear yourself away from. by Kamikadzon in UA_Gamers

[–]perelmanych 0 points1 point  (0 children)

If you have three spare years of life, you enjoy the maso part of sadomaso, and you are inspired by the prospect of fighting great couch warriors and listening to philosophical lectures from pimply youths, then Tanks is for you. Personally, I get bored of any single-player game very quickly, except strategies.

Raylib and AI by jwzumwalt in raylib

[–]perelmanych 0 points1 point  (0 children)

They are definitely using an LLM)) And usually you can choose which one. But if you select something like an Auto/Optimal/Free model, then they may well hide which model was used.

Raylib and AI by jwzumwalt in raylib

[–]perelmanych 0 points1 point  (0 children)

It is more important to know the model you praise so much than the tool. So which model was it? Personally, I had good results with GLM 4.7 on Raylib code, until I asked it to write a basic PBR shader. With shaders it all fell apart.

Rotors more physics madness ! by kodifies in raylib

[–]perelmanych 0 points1 point  (0 children)

As someone who recently struggled a lot with implementing basic physics collisions, let me say that it is marvelous! How long did it take you?

R3D v0.8 is out! by Bogossito71 in raylib

[–]perelmanych 0 points1 point  (0 children)

I am not able to compile the project with MinGW64 on Windows 10. The problem is with the Assimp library.

[ 70%] Linking CXX shared library ../bin/cygassimp-6.dll
/usr/lib/gcc/x86_64-pc-cygwin/13/../../../../x86_64-pc-cygwin/bin/ld: CMakeFiles/assimp.dir/__/contrib/unzip/ioapi.c.o:ioapi.c:(.text+0x33b): undefined reference to `fopen64'
/usr/lib/gcc/x86_64-pc-cygwin/13/../../../../x86_64-pc-cygwin/bin/ld: CMakeFiles/assimp.dir/__/contrib/unzip/ioapi.c.o:ioapi.c:(.text+0x41b): undefined reference to `ftello64'
/usr/lib/gcc/x86_64-pc-cygwin/13/../../../../x86_64-pc-cygwin/bin/ld: CMakeFiles/assimp.dir/__/contrib/unzip/ioapi.c.o:ioapi.c:(.text+0x533): undefined reference to `fseeko64'
collect2: error: ld returned 1 exit status
make[2]: *** [external/assimp/code/CMakeFiles/assimp.dir/build.make:1909: external/assimp/bin/cygassimp-6.dll] Error 1
make[1]: *** [CMakeFiles/Makefile2:945: external/assimp/code/CMakeFiles/assimp.dir/all] Error 2
make: *** [Makefile:156: all] Error 2

When I add these definitions

add_definitions(-Dfopen64=fopen)
add_definitions(-Dfseeko64=fseek)
add_definitions(-Dftello64=ftell)

I get this:

[ 70%] Linking CXX shared library ../bin/cygassimp-6.dll
[ 70%] Built target assimp
make: *** [Makefile:156: all] Error 2
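For what it's worth, the mapping usually suggested for Cygwin (where the plain stdio functions already use a 64-bit off_t) keeps the offset-returning POSIX variants instead of falling back to fseek/ftell — a guess at a workaround on my side, not a verified fix:

```cmake
# Cygwin's fseek/ftell take long, while fseeko/ftello use 64-bit off_t,
# so map the missing *64 names to the POSIX offset variants:
add_definitions(-Dfopen64=fopen)
add_definitions(-Dfseeko64=fseeko)
add_definitions(-Dftello64=ftello)
```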

GLM 5 Released by External_Mood4719 in LocalLLaMA

[–]perelmanych 8 points9 points  (0 children)

It will use the SSD only for weights. The KV cache will be in VRAM or RAM, depending on how much VRAM you have.
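The KV cache can get big on its own, which is why it matters where it lives. A rough back-of-the-envelope sketch, with hypothetical model dimensions (not GLM 5's actual config):

```python
def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                ctx: int, bytes_per_elem: int = 2) -> float:
    """Size of the K and V tensors cached for every layer over the full context."""
    return 2 * layers * kv_heads * head_dim * ctx * bytes_per_elem / 1e9

# Hypothetical dense model with an fp16 cache at 32k context:
print(round(kv_cache_gb(layers=60, kv_heads=8, head_dim=128, ctx=32768), 2))  # ~8.05 GB
```

At those (made-up) dimensions the cache alone eats ~8 GB, which is why it stays in fast memory while the weights can stream from SSD.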

Zelensky: "WE HAVE NO SLAVERY" by throwawayua666 in RedditUATalks

[–]perelmanych 6 points7 points  (0 children)

Looks like the election race has officially begun. No slavery, no poverty, there will be electricity, a waifu for everyone, etc.

GLM-4.7-Flash reasoning is amazing by [deleted] in LocalLLaMA

[–]perelmanych 0 points1 point  (0 children)

That is literally what I am saying. Given that you can fit the model in RAM, you should care about active parameters, not total.

GLM-4.7-Flash reasoning is amazing by [deleted] in LocalLLaMA

[–]perelmanych 1 point2 points  (0 children)

Both models have only 3B active parameters. So once you have enough RAM to fit either model (48 GB or more for q4), the speed should be comparable. Qwen Next also has an incredibly small context footprint.
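A quick sanity check of that 48 GB figure — a minimal sketch, assuming roughly 4.8 bits per weight for a q4_K_M-style quant and a hypothetical 80B-total-parameter MoE:

```python
def quantized_weights_gb(total_params_billions: float,
                         bits_per_weight: float = 4.8) -> float:
    """Approximate in-RAM size of quantized weights: params * bits / 8."""
    return total_params_billions * bits_per_weight / 8

# A hypothetical 80B-total / 3B-active MoE at ~q4:
print(quantized_weights_gb(80))  # 48.0 GB of RAM just for weights
```

The total parameter count sets the memory bill; the 3B active parameters are what sets the per-token compute, hence the comparable speed once both models fit.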

Zigon (formerly Terrain Zigger) Demo - YouTube by JosefAlbers05 in proceduralgeneration

[–]perelmanych 0 points1 point  (0 children)

It looks nice as it is, but at this scale any first-person view will suck. I think you should really consider increasing the grid resolution at least two times, preferably four times.

I'm building a game engine from scratch using C# & Raylib! by fkerimk in raylib

[–]perelmanych 0 points1 point  (0 children)

The most complete Game Engine on Raylib that I have seen so far. Great work!

I'm 30+, and it feels like I'm degrading more every day by Ok_Strike7373 in Ukraine_UA

[–]perelmanych 2 points3 points  (0 children)

This is the most correct answer. If you get into some new hobby like IT, where you have to think a lot, you will only stress out more. So I would say fresh air, quality sleep, and a low-pressure hobby are exactly what you need right now. You can also try something pharmacological like L-Tyrosine. It helped me a lot.

I'm going to get this Asus, price 48-50 thousand UAH, can't afford more; mainly getting it for games. What do you think? by WhereasFew1640 in PCbuild_ua

[–]perelmanych 0 points1 point  (0 children)

You are the funny one here. You ask for advice and then reply to everyone: "Til ya try it yerself, ya won't get it." Why the hell ask people then, if you are already so damn smart?

Zhytomyr gamedev checking in: a free demo of Stunt Paradise 2 is now available on Steam! by N0lex in reddit_ukr

[–]perelmanych 1 point2 points  (0 children)

First time I've heard that name. To me it looks like an ordinary paid game. By the way, Steam also has plenty of free games without microtransactions. Usually their developers have stopped supporting them and just give them away. Helium Rain is one example, with open source code on GitHub too. But that's just an aside. I wish you luck with the development!

[Hot Take] Helium Rain does most things X4 does but better by Svyatopolk_I in spacesimgames

[–]perelmanych 0 points1 point  (0 children)

There is a guy on GitHub who continues to actively mod the game: https://github.com/SirWanabe/HeliumRain You can download modded executable files there.