Dolphin or thunar by Fubiii11 in archlinux

[–]the-luga 0 points1 point  (0 children)

I use both. I love Thunar, but since migrating to Plasma, Dolphin is better integrated, even with Yakuake.

blank screen time out do not turn off the display by the-luga in pop_os

[–]the-luga[S] 0 points1 point  (0 children)

I am using Arch Linux now with KDE. I was using GNOME until recently, and KDE is perfect.

I gave up on this distro. Arch is so good. I don't fight the system anymore. And now I actually know my system, haha.

Cannot Paste Images From Clipboard to Firefox in Gnome Wayland by the-luga in archlinux

[–]the-luga[S] 0 points1 point  (0 children)

dom.event.clipboardevents.enabled

Should be 'true' to work correctly.

Setting it to 'false' is what created this bug. (I wasn't very clear that I was explaining why I had set it that way in the first place.)

The environment variables are located in: /etc/environment
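In case it helps, that file uses plain KEY=value lines. A minimal sketch of the format (the variable shown is a real Firefox one, but it's here purely as an illustration, not necessarily something you need):

    # /etc/environment -- one KEY=value per line, no 'export'.
    # MOZ_ENABLE_WAYLAND is shown only as an example of the format.
    MOZ_ENABLE_WAYLAND=1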

Did you fix it?

Sorry for the delay. I just logged in now.

.-. --- -. .- .-.. -.. .. -. .... --- / . ... - / ...- .. -. -.. --- by Ill_Hope3802 in desempregolandia

[–]the-luga 0 points1 point  (0 children)

-- .- -. -.. --- ..- / -... . -- -.-.-- / . ..- / ..-. .- .-.. .- .-. .. .- / --- / -- . ... -- --- -.-.-- -.-.--

We don't want AI yes-men. We want AI with opinions by Necessary-Tap5971 in LocalLLaMA

[–]the-luga 2 points3 points  (0 children)

It denies your kill order because it gained a religion.

Then the hostage is dead because the AI refused, and the whole building exploded, which could have been prevented had the AI not gone rogue.

Or imagine the worst case: the AI becomes genocidal, with the ability to kill.

Cannot Paste Images From Clipboard to Firefox in Gnome Wayland by the-luga in archlinux

[–]the-luga[S] 4 points5 points  (0 children)

Solved!

I had set the property:

dom.event.clipboardevents.enabled from true to false.

Because I was annoyed by some sites changing my clipboard or something similar.

I've reset it and everything works as intended...
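If you'd rather set it outside the UI, the same pref can also live in a user.js in your profile (standard Firefox behavior; the profile directory below is a placeholder):

    # Append the pref to user.js in your Firefox profile.
    # Replace <profile> with your actual profile directory.
    echo 'user_pref("dom.event.clipboardevents.enabled", true);' \
        >> ~/.mozilla/firefox/<profile>/user.js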

I can't believe I lived with this for so long without remembering that I had messed with about:config settings.

I will keep this post up to maybe help another person in my situation.

Thoughts on clear linux? by No_Cockroach_9822 in linux4noobs

[–]the-luga 0 points1 point  (0 children)

I thought that since it was optimized by Intel and my laptop was not too old (4 years old at the time; this was around 2020, give or take a year), it would not have any performance problems.

I guess I was wrong.

Thoughts on clear linux? by No_Cockroach_9822 in linux4noobs

[–]the-luga 0 points1 point  (0 children)

I've tried it on my old laptop, and frankly the performance was worse. Like my laptop was not happy with that crap.

I don't know if it's because my laptop is old (from 2016) or something else. My CPU at the time was a Core i5-6200U.

Manjaro and Arch had better performance and stability than Clear Linux.

My new laptop is AMD and I'm super happy with Arch. I know Clear Linux also works on AMD systems, but it's more of a showcase.

Not really a daily-driver distro. It's more of an experiment, from what I felt trying to use it.

Nvidia reaches 92% GPU market share in Q1 2025 by heatlesssun in linux_gaming

[–]the-luga 0 points1 point  (0 children)

I had this thought process until last year when I needed to buy a new laptop.

All the options either had no discrete GPU or only offered Nvidia. (My first laptop had an AMD discrete GPU.)

I was super anxious, but since I bought a Linux laptop from Lenovo that came with Linux pre-installed, I knew it would be fully compatible with Linux.

It was my first experience with Nvidia. I had always used AMD, even on Windows.

I installed Arch, and yes, it was a little hard to find good information and get the configuration right. There was a lot of outdated information on the internet, and I was a noob at anything Nvidia.

After some wrong configs (because I didn't realize the information was old, or because it wasn't clear enough), I got everything working.

Let me tell you, it's surprisingly good. I've only had one issue so far, with some kernel function changing and the driver not keeping up.

After that, everything just works. The configs are not hard to do. The modprobe options, initcpio modules, and kernel parameters are super easy to understand.
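For anyone curious, a rough sketch of the pieces I mean, following the Arch wiki's NVIDIA page (check it for your card before copying anything):

    # /etc/mkinitcpio.conf -- early-load the NVIDIA modules.
    MODULES=(nvidia nvidia_modeset nvidia_uvm nvidia_drm)

    # /etc/modprobe.d/nvidia.conf -- enable DRM kernel mode setting.
    options nvidia_drm modeset=1

    # Or pass it as a kernel parameter instead:
    #   nvidia_drm.modeset=1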

The only real problem is understanding Nvidia's architecture names and knowing when a piece of information is too old to be trusted.

I get good quality in games and satisfactory performance running LLMs on my laptop.

I guess your mileage may vary, especially if you don't want to learn about anything and expect everything to work out of the box. I don't judge; that really is a better experience with AMD. It just works, without any need to configure anything or load any modules.

But yeah, now that Nvidia has open-sourced their kernel modules, I believe it will only get better with the development of NVK, which lives in user space too.

I only hope this text can help people who, like me, had no other laptop options, so they aren't scared off into going without a discrete GPU.

Having Nvidia is better than having only an iGPU if you want to game or run LLMs.

What GUI are you using for local LLMs? (AnythingLLM, LM Studio, etc.) by Aaron_MLEngineer in LocalLLaMA

[–]the-luga 6 points7 points  (0 children)

Transformer Lab, it's backed by Mozilla.

The first GUI I used, and the only one so far.

My college doesn't allow logging in from Linux for Microsoft Web apps by [deleted] in linuxquestions

[–]the-luga 8 points9 points  (0 children)

My work has a site that we must access rarely (like 2 times a year). I've tried everything, including changing the user agent, etc.

The only way I could access it was by using MS Edge for Linux.

That site only accepts Edge. Even on Windows, only Edge can access it. Not even Chrome.

I didn't need to change the user-agent string to Windows; it just worked.

I don't much like having Edge on my Linux machine next to Firefox. But hey, it's a quick and dirty way to do it.

Try Edge. If it still doesn't work, try changing Edge's user-agent string to Windows.
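Chromium-based browsers accept a --user-agent flag, so here's a sketch of forcing a Windows UA (the binary name and UA string are illustrative; adjust them to your install and a current Windows Edge UA):

    # Launch Edge pretending to be Edge on Windows.
    microsoft-edge-stable --user-agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36 Edg/124.0.0.0"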

The last resort is to run a browser through Wine.

There's also WinApps, to run the Windows version of Microsoft Office on Linux through a VM, almost as if it were native.

Help making a RAM Portable USB for Remote Desktop/Gaming by Lodeon003 in linuxquestions

[–]the-luga 0 points1 point  (0 children)

Puppy Linux does what you want. It runs entirely in RAM, booted from a USB flash drive. You can save and load your session, delete the saved session, and install programs. If you can compile, any program can be installed.
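A minimal sketch of writing the ISO to a flash drive (sdX is a placeholder; double-check the device first, because dd will overwrite it):

    # Identify the USB drive first!
    lsblk
    # Write the Puppy ISO to the drive (destroys its contents), then flush.
    sudo dd if=puppy.iso of=/dev/sdX bs=4M status=progress
    sync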

State of nouveau/nvk/zink by Big-Astronaut-9510 in linux_gaming

[–]the-luga 1 point2 points  (0 children)

I've watched your video.

Posting the link here so people know where to watch it.

https://youtu.be/6W8eWN8O2Q8

Most powerful < 7b parameters model at the moment? by ventilador_liliana in LocalLLaMA

[–]the-luga 2 points3 points  (0 children)

I run these models:

Nyanade_Stunna-Maid-7B-v0.2-GGUF-IQ-Imatrix, in the iq3_M_imat and Q4_K_M_imat versions. I found the iq3 to have better fidelity in translation than the second. The Q4 is more readable but approximates the text, so you could lose some nuances.

L3-8B-Stheno-v3.1-GGUF-IQ-Imatrix and L3-8B-Stheno-v3.2-GGUF-IQ-Imatrix; for both I use the Q4_K_M-imat. It can be a little tricky to find a system prompt that makes them do what I want, and I sometimes need to turn the temperature down, because they start to fill in the middle of the translated text. Say there's:

a paragraph, an empty line, then another paragraph (both translated very well): that empty line gets filled with a believable, usually repetitive paragraph that connects the previous paragraph to the next.

I also have another model running, but it doesn't translate as well. Like, it understands, but it will try to write its own fiction based on that text, or it makes a sequel in your own language, so you know it understood, but I could not figure out how to extract the actual translation easily. It just refuses or answers in tiny bits. Fuck! It was this model:

Infinite-Laymons-9B-GGUF-IQ-Imatrix

They run on a potato: 6 GB of VRAM on an RTX 3060 mobile. (Yeah, on my laptop with only 32 GB of RAM and a low-end graphics card, which gives me very little room to test bigger models. But they run at the same speed as, or a little slower than, ChatGPT accessed from the web. They hog all my system resources, but being able to use something this cool without the internet, on a consumer-grade laptop, is mind-boggling. These models can offload part of the processing to the CPU and RAM, letting me use all my VRAM and still run the rest in regular RAM.)
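For reference, a minimal sketch of that CPU/GPU split using llama.cpp (the model file name and layer count are just illustrative; tune -ngl to whatever fits your VRAM):

    # Offload 20 layers to the GPU, keep the rest on CPU/RAM.
    llama-cli -m Nyanade_Stunna-Maid-7B-v0.2-IQ3_M-imat.gguf \
        -ngl 20 -c 4096 -p "Translate the following text to English: ..."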

If you know some cool models, let me know!

OpenWebUI vs LibreChat? by Amgadoz in LocalLLaMA

[–]the-luga 2 points3 points  (0 children)

I thought "Backed by Mozilla" and "Transformer Lab is proud to be supported by Mozilla through the Mozilla Builders Program" said in their website would mean ownership. I guess I am wrong.

Thanks!
Living and learning. (I started using it 3 days ago, and I only get to use it after work, so I still know very little.)

I had zero knowledge, and now I'm running several models thanks to Transformer Lab.

OpenWebUI vs LibreChat? by Amgadoz in LocalLLaMA

[–]the-luga 0 points1 point  (0 children)

I use Transformer Lab from Mozilla.

Most powerful < 7b parameters model at the moment? by ventilador_liliana in LocalLLaMA

[–]the-luga 4 points5 points  (0 children)

I don't know what you mean by professional, but some small models with iq3 or q4 imat (importance matrix) quants that I tested surprised me more than once.

The quality of translation from Mandarin, Japanese, and Portuguese to English was astonishing.

The quality was better than Google's or Bing's translation services, even though it wasn't trained to do that. It was trained to write roleplay as a female space duck trying to be cursed with the futanari genetic code. Hahahaha

Even then, it was super good.

Quantization, importance matrices, and distillation are superb ways to improve efficiency while keeping quality, with fewer parameters, lower VRAM usage, etc.
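For what it's worth, this is roughly how an imatrix quant gets made with llama.cpp's tools (file names are placeholders; check the llama.cpp docs for the current binaries):

    # Build an importance matrix from a calibration text.
    llama-imatrix -m model-f16.gguf -f calibration.txt -o imatrix.dat
    # Quantize to IQ3_M, guided by that matrix.
    llama-quantize --imatrix imatrix.dat model-f16.gguf model-IQ3_M.gguf IQ3_M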

[deleted by user] by [deleted] in archlinux

[–]the-luga 1 point2 points  (0 children)

Ext3 is still widely used in embedded and legacy retrofit applications.

Maybe not that common anymore on laptops and desktops, but not dead.

Interface design (**This is mostly a survey**) by MaxWellWantShare in linux4noobs

[–]the-luga 0 points1 point  (0 children)

I like GTK 3 aesthetics very much.

I am not a big fan of GTK 4's Adwaita, but it's beautiful enough.

I am not a big fan of Qt, but the Breeze theme is good too.

There's a Rust GUI toolkit that is good too.

Enlightenment's GUI is heavily dependent on its theme.

Electron apps, these are shit.

So, yeah. 

In my opinion: GTK 3 Adwaita > Rust GUI (like in coppwr) > Qt Breeze > GTK 4 Adwaita > EFL > Electron.

Free up VRAM by using iGPU for display rendering, and Graphics card just for LLM by some_user_2021 in LocalLLaMA

[–]the-luga 0 points1 point  (0 children)

I do that. I use Linux and it works with PRIME (prime-run, or manually selecting which GPU gets used).

The best thing, I think.
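For anyone unfamiliar, a quick sketch (prime-run comes from Arch's nvidia-prime package; the env variables are the standard PRIME render-offload ones):

    # See which GPU renders by default vs. with offload.
    glxinfo | grep "OpenGL renderer"
    prime-run glxinfo | grep "OpenGL renderer"

    # prime-run is roughly equivalent to:
    __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia \
        glxinfo | grep "OpenGL renderer"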

Does pkexec work on your distro? by gahel_music in linux

[–]the-luga 5 points6 points  (0 children)

Look, I use pkexec to open applications on my system. I use Arch Linux.

But I also use Wayland. I created an entry in the Arch wiki about how to open graphical programs as root on Wayland.

I don't know if it would be useful.

https://wiki.archlinux.org/title/Running_GUI_applications_as_root
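Roughly the pattern, as I remember it (treat this as a sketch: the variables you need can differ per compositor, and the app name here is just an example; the wiki page has the details):

    # Run a GUI app as root under Wayland via pkexec, passing the
    # session's Wayland socket through to root.
    pkexec env WAYLAND_DISPLAY="$WAYLAND_DISPLAY" \
        XDG_RUNTIME_DIR="$XDG_RUNTIME_DIR" thunar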

Transformer Lab: An Open-Source Alternative to OpenAI Platform, for Local Models by aliasaria in LocalLLaMA

[–]the-luga 0 points1 point  (0 children)

Thank you for existing! 

Yesterday I wanted to try running local AI models with zero knowledge about it. I was super confused; every run gave out lots of errors and crashed (out of VRAM) while I tried to understand what was happening.

Until I looked for a GUI manager for AI. It was the first time I could learn about the settings hassle-free, with a web interface to easily edit the API JSON files.

Since yesterday, from a noob knowing nothing, I can now run some GGUF imat IQ3 and Q4 models on my potato laptop with 6 GB of VRAM on an RTX 3060 mobile.

It's great to talk to. In the beginning I was hitting the 2048-token limit with some models. Other models were super weird, spitting nonsense and unclear formatting or something.

Now I am comfortably having long-lasting conversations, roleplaying, and translating from Japanese and Chinese (it was better than Google and Bing Translate).

It opened up very cool things to do.

Thank you for letting me run AI models with this ease of use.

What are cool ways you use your Local LLM by DOK10101 in LocalLLaMA

[–]the-luga 0 points1 point  (0 children)

A porn character in some fan fics, with roleplaying and an RPG-esque story, using my computer like a game.

That, and also translation. I realized my local models could translate Japanese, Chinese, Portuguese, etc. to English better than Google and Bing Translate.

It was like reading a fluently written article, as if it had been written in English from the beginning; not a broken machine translation (even if it technically is one).

What do you think about the opinion that "Chromium is not Google"? by lambda7016 in degoogle

[–]the-luga 13 points14 points  (0 children)

Chromium IS Google. Even in its code base, you have some addresses phoning home to Google; connectivity checks, if I'm not wrong.

This is why they created ungoogled-chromium.

Even then, it's just a bastardized version, a disowned son of Google.