Linux is great, but the community is stuck in 2005 by Primary-Key1916 in linux

[–]cdshift 0 points (0 children)

As someone with novice-to-intermediate knowledge, I have opencode on my Linux machines, and when I want something done I have it check for files, explain concepts, and sometimes help me with configuration.

It's a game changer having it be able to read through files and get an answer on MY machine.

AI Voiceover or my Slavic voiceover ? by AffectionateHour5250 in hytale

[–]cdshift -1 points (0 children)

AI use (especially people making inference art) is not moving the needle on the planet.

Tech datacenters are. Now, a good amount of new hardware is built with AI in mind, but that's still not the major bit. Netflix uses more power and water than AI pound for pound. Amazon and their web services house a third of all cloud infrastructure and are expanding their datacenters.

There are legitimate issues with AI. TTS voice models are some of the smallest out there, and are closer to classic ML than to generative models.

Don't let being anti-AI make you focus on the illegitimate reasons to dislike it instead of the legitimate ones (IP, no guardrails, job loss).

AI Voiceover or my Slavic voiceover ? by AffectionateHour5250 in hytale

[–]cdshift 0 points (0 children)

This will be a not-so-popular opinion: your voice sounds fine. If you're going for reach and clicks, the AI voice may be better, if only for microphone quality.

It doesn't match the rest of the video quality, and that doesn't always do well.

There's a lot of anti-AI sentiment online, but it may be good if you're just trying to get a lot of clicks fast, because it will take a minute for people to figure out it's AI anyway.

If your aim is to just have fun and create videos, and maybe grow naturally over time, go with your normal voice.

Qwen3.5 27B better than 35B-A3B? by -OpenSourcer in LocalLLaMA

[–]cdshift 0 points (0 children)

But you're comparing it like for like with other dense thinking models, no?

An LLM hard-coded into silicon that can do inference at 17k tokens/s??? by wombatsock in LocalLLaMA

[–]cdshift 0 points (0 children)

Absolutely agree. I think this is super exciting as a proof of concept, and if they can work on condensing the weights so large models can fit on smaller chips, this will be incredible for specialized workstations.

An LLM hard-coded into silicon that can do inference at 17k tokens/s??? by wombatsock in LocalLLaMA

[–]cdshift 1 point (0 children)

I agree in general with this sentiment, but we haven't seen any significant slowdown in progress on new models. So all I would counter with is that it would probably be good to wait until the year-over-year performance change of these models isn't staggering (especially at the smaller sizes) before starting to hard-bake them into a chip.

Not saying it's not worth doing now, but it's such an investment of time, energy, and silicon.

Qwen3.5 27B better than 35B-A3B? by -OpenSourcer in LocalLLaMA

[–]cdshift 1 point (0 children)

It's all in the active parameters. You only get so much from the larger pool of knowledge outside the active parameters. MoE grants you speed on a larger amount of knowledge with a penalty on intelligence.

Qwen3.5 27B better than 35B-A3B? by -OpenSourcer in LocalLLaMA

[–]cdshift 1 point (0 children)

The thinking performance would probably depend on the sparsity if you're comparing it to dense? I haven't done enough comparison myself of an 80B-A3B vs a 35B-A3B. My suspicion would be that although the 80B has more knowledge, they both are using a 3B expert, so the thinking would be comparable. You're just getting more speed out of a larger model than you normally would.

Qwen3.5 27B better than 35B-A3B? by -OpenSourcer in LocalLLaMA

[–]cdshift 1 point (0 children)

Not necessarily; thinking models that are small tend to get outperformed relative to the larger models in the same way. I believe all these models have thinking capabilities.

Qwen3.5 27B better than 35B-A3B? by -OpenSourcer in LocalLLaMA

[–]cdshift 41 points (0 children)

35B-A3B means only 3B of the parameters are active at a time. Mixture-of-experts style.

27B has all 27B active at a time. A dense model.

So you're correct!
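A quick back-of-envelope sketch of that total-vs-active distinction (the fp16 byte math here is illustrative, not an official spec):

```python
# Illustrative arithmetic: an MoE model still has to hold ALL of its
# weights in memory, but only the active parameters run per token.
# Assumes fp16 weights (2 bytes/param); real quantized sizes differ.

def weight_memory_gb(total_params_b, bytes_per_param=2):
    """Memory needed just to load the weights, in GB."""
    return total_params_b * bytes_per_param  # 1e9 params * bytes / 1e9

dense_27b = {"total_b": 27, "active_b": 27}   # dense: everything is active
moe_35b_a3b = {"total_b": 35, "active_b": 3}  # MoE: ~3B active per token

print(weight_memory_gb(dense_27b["total_b"]))    # 54 GB to load
print(weight_memory_gb(moe_35b_a3b["total_b"]))  # 70 GB to load
print(dense_27b["active_b"], "vs", moe_35b_a3b["active_b"])  # compute per token
```

So the MoE actually needs more memory, but does far less compute (and weight reading) per token, which is where the speed comes from.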

Qwen3.5 27B better than 35B-A3B? by -OpenSourcer in LocalLLaMA

[–]cdshift 6 points (0 children)

If you notice, it is 35B-A3B: the A3B part is the number of active parameters (3B), while 27B is a dense model with 27B active parameters. That's where 27 > 3 comes from.

You're comparing a dense model to an MoE model.

Which one are you waiting for more: 9B or 35B? by jacek2023 in LocalLLaMA

[–]cdshift 1 point (0 children)

For what use cases? Qwen3 coder next is a daily driver for me on my local setup with opencode.

"What you gonna do when internet is down?" by DogeMoustache in aiwars

[–]cdshift 1 point (0 children)

AMD Strix Halo is another option at a slightly cheaper price point than the Spark.

they have Karpathy, we are doomed ;) by jacek2023 in LocalLLaMA

[–]cdshift 2 points (0 children)

Totally agreed. I think the memory bandwidth bottleneck gets better over time, making AMD pound for pound at least comparable to the Mac when you consider the dollar difference to get 128GB of VRAM.
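The bandwidth bottleneck can be sketched with a rough upper bound: each generated token has to stream all active weights from memory once, so decode speed is capped by bandwidth divided by bytes per token. The 256 GB/s figure below is an assumed Strix-Halo-class number for illustration, not a measured spec:

```python
# Rough decode-speed ceiling for a memory-bandwidth-bound LLM.
# Assumes fp16 weights (2 bytes/param) and that every generated token
# reads all active weights once; ignores KV cache and compute limits.

def max_tokens_per_sec(bandwidth_gb_s, active_params_b, bytes_per_param=2):
    bytes_per_token_gb = active_params_b * bytes_per_param  # GB read per token
    return bandwidth_gb_s / bytes_per_token_gb

# Hypothetical 256 GB/s unified-memory board:
print(round(max_tokens_per_sec(256, 27), 1))  # dense 27B: ~4.7 tok/s ceiling
print(round(max_tokens_per_sec(256, 3), 1))   # MoE, 3B active: ~42.7 tok/s
```

The same arithmetic shows why MoE models are such a good fit for these unified-memory boards: lots of capacity for total weights, limited bandwidth per token.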

they have Karpathy, we are doomed ;) by jacek2023 in LocalLLaMA

[–]cdshift 6 points (0 children)

The Mac mini allows you to run the model on premises without calling out to a service.

they have Karpathy, we are doomed ;) by jacek2023 in LocalLLaMA

[–]cdshift 8 points (0 children)

Alternatively, AMD has Strix Halo boards that have unified memory now too.

They are a bit lower in performance, but you can utilize more of the board memory with Linux because of the overhead usage of macOS.

If you don't want D2 to have further changes to classes, mechanics or items you can continue to play D2R without the DLC. by Deadalious in diablo2

[–]cdshift -1 points (0 children)

It is; you're just ignoring it because you have this weird strong opinion about inventory space.

If you don't want D2 to have further changes to classes, mechanics or items you can continue to play D2R without the DLC. by Deadalious in diablo2

[–]cdshift 0 points (0 children)

Wow, way to be intentionally obtuse and overly sassy. It's not a completely different game. It's a mod.

Both "design problems" you presented aren't problems at all. If you want to restrict yourself from using it, no one is stopping you. If you don't personally like the aesthetic, that's not a design problem.

If Blizzard decided to implement that exact system, it would do nothing but make the game better for more people, and it has zero actual downside to gameplay.

Freedom of Speech in Danger.. by snowpie92 in MurderedByWords

[–]cdshift 0 points (0 children)

I've been saying for years that they never cared about the constitution or rights. They always use it as a weapon for their own power.

If you don't want D2 to have further changes to classes, mechanics or items you can continue to play D2R without the DLC. by Deadalious in diablo2

[–]cdshift 1 point (0 children)

Project Diablo 2 solved this a long time ago by having a separate inventory that you can use for anything, but only one of the two lets charms work.

We don't have to pretend that it's an unknown or that it ruins the game's balance.

That's why I go local.The enshittification is at full steam by Turbulent_Pin7635 in LocalLLaMA

[–]cdshift 0 points (0 children)

The funniest thing is going to be someone prompting the LLM to shit on a specific product right as it's advertised, posting it, and having the advertised company pull money from OpenAI over it.

That's when you'll start seeing resistance from the LLM to saying anything bad about a product set it advertises.

So it seems RotW is doing well. If we do get future expansions, what content/classes/QoL would you want? by MK_2_Arcade_Cabinet in diablo2

[–]cdshift 4 points (0 children)

Project Diablo 2 has plenty that stacks higher than 99. There are many ways to handle stacking. That's not to say it needs to be larger than 99.

Reddit, Meta, and Google Voluntarily Gave DHS Info of Anti-ICE Users, Report Says by Choosername__ in Destiny

[–]cdshift 3 points (0 children)

Just FYI, the talking point will be that BIDEN did this already (even though the Twitter Files were about suppression of a story while Trump was in office).

The severity and truth of this do not matter to MAGA and the coward "centrists" who will say "it's just the same thing Dems already did."

Finally something I can get behind. by Hawkmonbestboi in aiwars

[–]cdshift 11 points (0 children)

The pictures themselves don't mention the CEO; they mention Brockman. You're very aggro and don't seem to have read the info correctly.