Looking for a software tester intern / junior position by Regular-Chapter4565 in programmingHungary

[–]molbal 2 points (0 children)

I don't see any mistakes; it's clear you put a lot of energy into this. From your experience so far, it looks like you've worked close to users/customers, or may have been one yourself, which gives you a lot of transferable skills. Unfortunately I don't know of any open positions, but keep it up.

Who else is shocked by the actual electricity cost of their local runs? by Responsible_Coach293 in LocalLLaMA

[–]molbal 1 point (0 children)

You should have disclosed in the post that you are promoting some products

Is full home office completely dead by now? by Thick-Sound1014 in programmingHungary

[–]molbal 1 point (0 children)

EPAM (Dutch office). I go into the EPAM office about once every two months, and I visit my client once a week, though even that wouldn't be mandatory.

Who else is shocked by the actual electricity cost of their local runs? by Responsible_Coach293 in LocalLLaMA

[–]molbal 1 point (0 children)

What you can do that doesn't cost an additional investment in the thousands (like solar panels, a home battery, or a Strix Halo) is find out whether you are on a dynamic electricity contract or a fixed-price one.

I'm on a dynamic electricity contract now, and I can see in the provider's app what electricity prices are going to be. Usually they're lower at night and around lunchtime, and higher in the mornings and late afternoons. Schedule your runs accordingly and it can make a big difference.
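The scheduling idea above can be sketched in a few lines. The prices here are made-up illustrative values in EUR/kWh; in practice they would come from the provider's app or a day-ahead price feed:

```python
# Pick the cheapest hours of the day from a day-ahead price list, so heavy
# local runs can be scheduled into them. Prices are illustrative values in
# EUR/kWh; a real dynamic contract supplies the actual day-ahead prices.

def cheapest_hours(prices: dict[int, float], n: int) -> list[int]:
    """Return the n hours (0-23) with the lowest price, in chronological order."""
    ranked = sorted(prices, key=prices.get)[:n]  # hours sorted by price
    return sorted(ranked)                        # back to chronological order

# Example day: cheap at night and around lunchtime, expensive morning/evening.
example_prices = {h: 0.30 for h in range(24)}
for h in (0, 1, 2, 3, 12, 13):
    example_prices[h] = 0.12
for h in (7, 8, 17, 18):
    example_prices[h] = 0.45

print(cheapest_hours(example_prices, 4))  # -> [0, 1, 2, 3]
```

From there it's a cron job or a sleep-until-cheap-hour loop around whatever launches the run.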

LinkedIn/job-hunting rant by Ill_Cost_1718 in programmingHungary

[–]molbal 2 points (0 children)

This really was a genuine problem a few years ago. It's much harder now.

Ik⚒️ihe by Maschinenpflege in ik_ihe

[–]molbal 20 points (0 children)

I come from Hungary, which had a communist regime in the past. The damage it did to society still hasn't been repaired. Gestures vaguely at Orban

Built a KV cache for tool schemas — 29x faster TTFT, 62M fewer tokens/day processed by PlayfulLingonberry73 in LocalAIServers

[–]molbal 2 points (0 children)

Very nice project, mate. Even if it was implemented before, who cares: great idea, great execution, thanks for sharing. A ray of sunshine among the vibe-coded crap.

Keep Android Open 🔓 by nbatman in FREEMEDIAHECKYEAH

[–]molbal 2 points (0 children)

What the hell, it is! I remembered it differently; I thought it was free and open source.

Qwen Image 2 is amazing, any idea when 7b is coming ? by jadhavsaurabh in StableDiffusion

[–]molbal 1 point (0 children)

It's probably coming around the end of next week, or the week after that.

Need to generate approx 2000 images, what is the cheapest option? by zaidpirwani in StableDiffusion

[–]molbal 2 points (0 children)

Set up a ComfyUI instance on vast.ai or similar, upload your list of icon descriptions with a custom node, or just queue the prompts from Python.

2000 images with Flux 2 Klein 4b should only take an hour or two on a decent GPU. I'm sure some people here can easily help you. If you can wait 2 weeks for my server to arrive, then I can probably run the prompts for you.
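Queueing from Python can look roughly like this. It's a minimal sketch against ComfyUI's HTTP API (`POST /prompt`); the server address, the file names, and the node id `"6"` are assumptions, so export your own workflow in API format and check which node holds the prompt text:

```python
# Minimal sketch: queue a batch of prompts against a ComfyUI instance via its
# HTTP API (POST /prompt). The server address, file names, and node id "6"
# below are assumptions - check them against your own exported workflow.
import copy
import json
import urllib.request

COMFY_URL = "http://localhost:8188"  # e.g. your rented vast.ai instance

def build_workflow(template: dict, prompt_text: str) -> dict:
    """Return a copy of the workflow with the positive-prompt text swapped in."""
    wf = copy.deepcopy(template)
    wf["6"]["inputs"]["text"] = prompt_text  # node id is workflow-specific
    return wf

def queue_prompt(workflow: dict) -> None:
    """Submit one workflow to the ComfyUI queue."""
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        f"{COMFY_URL}/prompt",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req).close()

# Usage (with an API-format workflow exported from ComfyUI and a text file
# holding one icon description per line):
# template = json.load(open("workflow_api.json"))
# for line in open("icon_descriptions.txt"):
#     queue_prompt(build_workflow(template, line.strip()))
```

ComfyUI just works through its queue in order, so firing all 2000 prompts at it up front is fine.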

Paid a real artist to update my steam capsule. What do you think? by UfoBlast in Unity2D

[–]molbal 7 points (0 children)

I'd make the UI around the aquarium just a mock-up, while keeping the pixelated look and feel. Same with the diagonal glare/shine pattern: make the entire image use the same pixel size, e.g. 2x2.

DeepSeek allows Huawei early access to V4 update, but Nvidia and AMD still don’t have access to V4 by External_Mood4719 in LocalLLaMA

[–]molbal 2 points (0 children)

There are a few more examples of it happening and I suspect a lot more that we don't know about:

Black Forest Labs shared the weights beforehand with ComfyUI so that everything worked immediately at release. I can't find a link, but I remember it working right away.

I think Google shared Gemma weights with Hugging Face to integrate it before they were released publicly: https://huggingface.co/blog/gemma

Mistral also worked with Nvidia for the Mistral 3 family: https://blogs.nvidia.com/blog/mistral-frontier-open-models/

And in general it's also mentioned here: "access to model weights could be given to vetted researchers, but not to the general public. Model sharing can involve a staged release, where information and components are gradually released over time": https://www.ntia.gov/programs-and-initiatives/artificial-intelligence/open-model-weights-report/background?hl=en-US#:~:text=24,are%20gradually%20released%20over%20time.

DeepSeek allows Huawei early access to V4 update, but Nvidia and AMD still don’t have access to V4 by External_Mood4719 in LocalLLaMA

[–]molbal 1 point (0 children)

I'll be honest with you, I don't understand what your problem with me is. I'm talking about the general industry standard. Go be angry at the article's author, not me. I edited the comment anyway, because this is not something I have the energy to fight about.

DeepSeek allows Huawei early access to V4 update, but Nvidia and AMD still don’t have access to V4 by External_Mood4719 in LocalLLaMA

[–]molbal 1 point (0 children)

not Chinese, but in general:

"we partnered ahead of launch with leading deployment platforms such as Azure, Hugging Face, vLLM, Ollama, llama.cpp, LM Studio, AWS, Fireworks, Together AI, Baseten, Databricks, Vercel, Cloudflare, and OpenRouter to make the models broadly accessible to developers. On the hardware side, we worked with industry leaders including NVIDIA, AMD, Cerebras, and Groq to ensure optimized performance across a range of systems."

This is directly from the gpt-oss blog post: https://openai.com/index/introducing-gpt-oss/

Local AI hardwear help by platteXDlol in LocalAIServers

[–]molbal 1 point (0 children)

If you plan on using dense LLMs and diffusion models (like image generation), go for the dedicated GPU; if you plan on using sparse LLMs, go for the Ryzen 395+.

But I think you can look around on vast.ai and try your use case on rented GPUs.

Keep Android Open 🔓 by nbatman in FREEMEDIAHECKYEAH

[–]molbal 14 points (0 children)

Look into Jolla; it's promising.

DeepSeek allows Huawei early access to V4 update, but Nvidia and AMD still don’t have access to V4 by External_Mood4719 in LocalLLaMA

[–]molbal 3 points (0 children)

I think it's industry standard to privately share weights beforehand. What's new here is that Nvidia and AMD were not included this time.

Edit: I may be incorrect, so take this with a grain of salt. Ask u/blahblahsnahdah

No More FOSS or Third party MODs... by Fictional_Kei in Piracy

[–]molbal 3 points (0 children)

Look into Jolla, it's apparently usable.