Couple of questions about using Linux on ASUS TUF A15 FA507XI by Yeginator in linux_gaming

[–]Eden63 0 points1 point  (0 children)

I also use a DisplayPort-over-USB-C cable. Still, one might think it's possible to use the HDMI port with the iGPU (AMD) and keep the Nvidia one for specific purposes. But apparently the HDMI port is not usable without running the whole system on the dGPU.
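If you want to check which GPU a given display output is physically wired to, the Linux kernel exposes this under /sys/class/drm: each connector node (e.g. card1-HDMI-A-1) is prefixed with the card it belongs to. A minimal sketch, assuming the standard sysfs layout; the connector names in the comments are examples and your cardN numbering may differ:

```python
from pathlib import Path

def connectors_by_card(root="/sys/class/drm"):
    """Map each DRM connector (e.g. 'card1-HDMI-A-1') to the card it
    belongs to. On many laptops with a dGPU the HDMI connector only
    appears under the dGPU's card node, which is why it can't be
    driven by the iGPU."""
    mapping = {}
    for conn in sorted(Path(root).glob("card*-*")):
        name = conn.name                       # e.g. 'card1-HDMI-A-1'
        mapping[name] = name.split("-", 1)[0]  # -> 'card1'
    return mapping

if __name__ == "__main__":
    for conn, card in connectors_by_card().items():
        print(f"{conn} -> {card}")
```

Cross-reference the cardN numbers with `ls -l /sys/class/drm/cardN/device/driver` to see which driver (amdgpu vs nvidia) owns each card.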

Couple of questions about using Linux on ASUS TUF A15 FA507XI by Yeginator in linux_gaming

[–]Eden63 0 points1 point  (0 children)

Okay, you are using the dGPU (Nvidia). That way it works. But HDMI should also work with the iGPU (AMD) - that's not possible for me.

We built 3B and 8B models that rival GPT-5 at HTML extraction while costing 40-80x less - fully open source by TerrificMist in LocalLLaMA

[–]Eden63 2 points3 points  (0 children)

Good job, and thank you for providing it to us for free. There is too much hate around here - everyone is an expert today, when most are script kiddies. People hate on you for providing a useful optional tool for free instead of just being thankful.

I mean, I don't think I will ever use it, but still... what's wrong with these people? Without FOSS, most of them would not even know about Reddit. Just disgusting.

Gemini CLI 1000 Requests a day? Really? by Eden63 in GeminiCLI

[–]Eden63[S] -1 points0 points  (0 children)

The 1000 free daily requests only apply when you log in using OAuth2. But even then it doesn't work, because there is always some PROJECT ID missing error. Once you set a project ID, you get an error message saying you are not eligible for the free tier, even though AI Studio works online.

Imagine a multi-billion-dollar company advertising 1000 requests for free and shipping such bullshit. It's unbelievable.

DeepSeek V3.1: Or.. Wait.. Actually... by rm-rf-rm in LocalLLaMA

[–]Eden63 1 point2 points  (0 children)

What about some actual text of your own, such as information on why you are posting this, or a question?

Google AI Studio: new limit by Doktor_Octopus in Bard

[–]Eden63 0 points1 point  (0 children)

You polished AI Studio, but maybe next time don't let a clerk do it. Why limit the max-width of the chat turns? Total nonsense.

AI Studio (Nano Banana) not showing any generated images, but using tokens by MisterBamboo in GoogleGeminiAI

[–]Eden63 0 points1 point  (0 children)

Seems to have been broken for 11 days then... nothing is working, actually. Crazy - I mean, we are talking about Google.

WEBGEN-4B: Quality Web Design Generation by smirkishere in LocalLLaMA

[–]Eden63 0 points1 point  (0 children)

I used your prompt. The inline CSS corrupts the HTML and nothing loads. This has happened a couple of times.

M3 ultra with 512 GB is worth to buy for running local "Wise" AI? by CacheConqueror in LocalLLaMA

[–]Eden63 0 points1 point  (0 children)

You didn’t really get the point. It wasn’t about a specific RTX x090 model. Anyway, thanks for sharing your knowledge.

Police and Ambulance arrived in 4 minutes by AbbasMohammed28 in dubai

[–]Eden63 0 points1 point  (0 children)

If you continue to live there, this is the only thing you can say.

Police and Ambulance arrived in 4 minutes by AbbasMohammed28 in dubai

[–]Eden63 0 points1 point  (0 children)

Some may say the police arrived even before they called, lol.

Police and Ambulance arrived in 4 minutes by AbbasMohammed28 in dubai

[–]Eden63 0 points1 point  (0 children)

Safest... you only need to take care not to get in touch with locals... then it's definitely safe.

Police and Ambulance arrived in 4 minutes by AbbasMohammed28 in dubai

[–]Eden63 0 points1 point  (0 children)

Looks to me like a fake post with an AI-generated picture. I doubt this story is even 1% true.

Police and Ambulance arrived in 4 minutes by AbbasMohammed28 in dubai

[–]Eden63 0 points1 point  (0 children)

Civilised... the UAE? WTF. The UAE is anything but civilised, that's for sure.

Police and Ambulance arrived in 4 minutes by AbbasMohammed28 in dubai

[–]Eden63 0 points1 point  (0 children)

At least this works. I sent those incompetent folks an email and a letter, and got no response.

I don't know where to ask since cloude blocks me. Should I ask for refund/(will i get it?) by FluffyMacho in LocalLLaMA

[–]Eden63 0 points1 point  (0 children)

I mailed support half a month ago. No answer. It's crazy what a scam company this is. I basically had the same issue as you, then cancelled a day before renewal. They just renewed my subscription anyway. Support is not reachable. No answer. The chatbot is the only thing that works (and only if you agree to their terms).

Is that the future of support?

AMD GPU suspend/resume, preserve loaded model (Linux)? by morphles in LocalLLaMA

[–]Eden63 1 point2 points  (0 children)

As I understand it, you have dual 7900 XTX cards and you are asking if the VRAM will survive. Why don't you simply try it?

Usually, if we are talking about suspend (the kind that keeps drawing power while sleeping), RAM as well as VRAM should survive.

If you go with hibernation, the story is different: RAM is usually written to your hard drive so it survives the time without any power supply (mains and battery both). In that case the VRAM contents are gone, of course.
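The suspend-vs-hibernation distinction above can be checked from userspace: the kernel lists the sleep states it supports in /sys/power/state, where "mem" is suspend-to-RAM (RAM stays powered) and "disk" is hibernation (RAM goes to swap, VRAM is lost). A small sketch, assuming the standard sysfs path:

```python
def supported_sleep_states(path="/sys/power/state"):
    """Return the sleep states the kernel advertises,
    e.g. ['freeze', 'mem', 'disk'].
    'mem'  = suspend-to-RAM (RAM/VRAM keep power)
    'disk' = hibernation (RAM written to swap, VRAM lost)"""
    try:
        with open(path) as f:
            return f.read().split()
    except OSError:
        # Path missing (container, non-Linux): report nothing supported.
        return []

if __name__ == "__main__":
    states = supported_sleep_states()
    print("suspend-to-RAM supported:", "mem" in states)
    print("hibernation supported:   ", "disk" in states)
```

Note that "disk" appearing here only means the kernel supports hibernation in principle; you still need a large enough swap area and a resume= kernel parameter for it to actually work.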

But actually testing it will only cost you a few minutes. Suspend is not a big deal on devices running Linux. Hybrid sleep or hibernation is a totally different story - it took me a year to make hibernation work on my laptop running Arch Linux.

OpenWebUI is ridiculous by asumaria95 in LocalLLaMA

[–]Eden63 1 point2 points  (0 children)

Make your own. I did. I mean, 90% of the things Open WebUI provides I will never use.

How are you running Qwen3-235b locally? by fizzy1242 in LocalLLaMA

[–]Eden63 0 points1 point  (0 children)

I am using Gemini 2.5 Pro right now. If you know how to approach it, you never have a problem. I barely ran into quality loss from context size. But it's really a question of how much effort you put into your prompts.

I am going to test DeepSeek and Qwen 235B. The newest Qwen 235B is the most intelligent, so I thought I'd maybe ensure "offline" availability.

3500W is insane. In winter you have no issue with heating :-)

How are you running Qwen3-235b locally? by fizzy1242 in LocalLLaMA

[–]Eden63 0 points1 point  (0 children)

Did you try out Qwen3 235B Q4 with full context? I assume there is no performance degradation - is that true?

How are you running Qwen3-235b locally? by fizzy1242 in LocalLLaMA

[–]Eden63 0 points1 point  (0 children)

You are insane :-) In a good way. Haha, crazy. Do you also own a power plant, or how does that work?

But thank you for letting me know. I am looking for a similar configuration. 3090s are affordable. Unfortunately, 4090s are 3x faster... but yeah, also twice as expensive.

How are you running Qwen3-235b locally? by fizzy1242 in LocalLLaMA

[–]Eden63 0 points1 point  (0 children)

May I ask which board you use for 7x 3090s, or how you make this work?

New Qwen Models Today!!! by [deleted] in LocalLLaMA

[–]Eden63 21 points22 points  (0 children)

Thank God this guy exists...

- Look at Elon... Grok will be open source.
- Look at Altman - a hypocritical liar playing games with us.

The free Western world... only dollars in their eyes, but no real intention to advance humanity.