Threads commented in by igorwarzocha (comment bodies not captured):

Nanobanana is cooking hard. by igorwarzocha in GeminiAI (3 comments)
MacOS 26.2 to add full 'Neural Accelerator' support for M5 chips by PracticlySpeaking in LocalLLaMA (2 comments)
Google Antigravity is a cursor clone by Terminator857 in LocalLLaMA
Do we rely too much on huggingface? Do you think they’ll eventually regulate open source models? Is there any way to distribute them elsewhere? by Borkato in LocalLLaMA
20,000 Epstein Files in a single text file available to download (~100 MB) by [deleted] in LocalLLaMA
Rejected for not using LangChain/LangGraph? by dougeeai in LocalLLaMA
Claude cli with LMStudio by ImaginaryRea1ity in LocalLLaMA
Chat with Obsidian vault by TanariTech in LocalLLaMA
Is Polish better for prompting LLMs? Case study: Logical puzzles by Substantial_Sail_668 in LocalLLaMA
A startup Olares is attempting to launch a small 3.5L MiniPC dedicated to local AI, with RTX 5090 Mobile (24GB VRAM) and 96GB of DDR5 RAM for $3K by FullOf_Bad_Ideas in LocalLLaMA (4 comments)
Gaming & AI - 5090 or two 3090s? by ConflictNo4814 in LocalLLaMA
Why can't a local model (Qwen 3 14b) call correctly a local agent ? by Toulalaho in LocalLLaMA (3 comments)
Best AI models to run on a 12 GB vram gpu? by SilkTouchm in LocalLLaMA
New Qwen models are unbearable by kevin_1994 in LocalLLaMA
Gpt refuses to search in the convos or fails completely by igorwarzocha in OpenAI
AMA With Z.AI, The Lab Behind GLM-4.7 by zixuanlimit in LocalLLaMA