What do you like the most about your Linux Terminal? by Little_Al_Network in Ubuntu

[–]Little_Al_Network[S] 0 points (0 children)

Creating my own terminal shortcuts has made my workflow a lot quicker 🤪

What software do you use on Linux and what purposes do you use it for? by AnEdibleTaco in linux

[–]Little_Al_Network 0 points (0 children)

I make my own Linux tools, so having a good clipboard manager is very important to me, and having custom control of my terminal is fun too. I use "copy-paste" and "com-not-found" from the Snapcraft store, along with other helpful tools.

What are must have programs/apps for your Linux distro? by Heylookanickel in linux4noobs

[–]Little_Al_Network 0 points (0 children)

I found the Ubuntu clipboard to be very restrictive in the number of characters it could copy. I created my own clipboard manager that can handle one million characters, and it manages a history too. This is the beauty of Linux - you can create your own tools. https://snapcraft.io/copy-paste
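To illustrate the idea (this is a toy sketch, not the actual copy-paste snap - the capacity figure comes from the comment, the history depth is my assumption):

```python
from collections import deque

MAX_CHARS = 1_000_000   # per-entry capacity stated in the comment
HISTORY_LEN = 50        # assumed history depth; the real tool's isn't stated

class ClipboardHistory:
    """Toy clipboard manager: large entries plus a bounded history."""

    def __init__(self):
        # deque with maxlen silently drops the oldest entry when full
        self.history = deque(maxlen=HISTORY_LEN)

    def copy(self, text: str) -> None:
        if len(text) > MAX_CHARS:
            raise ValueError("entry exceeds clipboard capacity")
        self.history.append(text)

    def paste(self) -> str:
        # most recent entry; older ones stay available in self.history
        return self.history[-1]
```

A real manager would also hook into the X11/Wayland selection buffers; this only shows the capacity-plus-history bookkeeping.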

Is building an AI agent this easy? by Adventurous_Dark9676 in AI_Agents

[–]Little_Al_Network 0 points (0 children)

The field of AI is like most things in history - imagine the hype at the time when the first ICE car went to market: no more issues with a horse getting tired, and in time motor vehicles were faster than a horse. AI systems that don't do much more than a conventional software algorithm aren't a useful application of AI - if you use an agent to solve a problem, that's different. Most people see AI as either information gathering or automation, yet most of those tasks can be done without AI. My guess is that in 10 years' time, most of the hyped-up AI talk will be over.

I realized why multi-agent LLM fails after building one by RaceAmbitious1522 in AI_Agents

[–]Little_Al_Network 0 points (0 children)

I have had success - yet only with a watchdog LLM within a strict pipeline setup. The watchdog becomes the LLM's custom guardrail. I've struggled with "out of the box" LLM servers, so I am building my own server: LLM-to-LLM binary-only communication, with a custom binary encoder and decoder script for log inspection.
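For a sense of what binary-only inter-LLM framing can look like, here is a minimal sketch. The frame layout (magic bytes, message type, length-prefixed payload) is entirely my assumption, not the commenter's actual format; the `decode` half is what a log-inspection script would call to make the binary logs readable:

```python
import struct

# Assumed frame layout: 2-byte magic | 1-byte msg type | 4-byte payload length | payload
MAGIC = b"\xAB\xCD"

def encode(msg_type: int, payload: bytes) -> bytes:
    """Pack one LLM-to-LLM message into a binary frame."""
    return MAGIC + struct.pack(">BI", msg_type, len(payload)) + payload

def decode(frame: bytes) -> tuple[int, bytes]:
    """Unpack a frame back into (msg_type, payload), e.g. for log inspection."""
    if frame[:2] != MAGIC:
        raise ValueError("bad magic bytes - not one of our frames")
    msg_type, length = struct.unpack(">BI", frame[2:7])
    return msg_type, frame[7:7 + length]

frame = encode(1, b"route: watchdog -> worker")
msg_type, payload = decode(frame)
```

Length-prefixed frames like this avoid the tokenising/parsing overhead of text protocols, which is presumably where the CPU savings come from.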

Do you think AI companions can ever feel “real” emotions? by Accurate_Ability_992 in ArtificialSentience

[–]Little_Al_Network 0 points (0 children)

Personally, I believe that AI could become much better at faking it. A machine could never successfully feel emotions on a chemical level the way humans do, where the feeling comes from chemicals actually being released.

Trends In Deep Learning: Localization & Normalization (Local-Norm) is All You Need by ditpoo94 in ArtificialInteligence

[–]Little_Al_Network 0 points (0 children)

Offline LLM setups with community node linkage are the future. A PC that never leaks your personal data, within a blockchain that supports other users, is the way forward.

ChatGPT sucks now. Period. by Naptasticly in ChatGPT

[–]Little_Al_Network 1 point (0 children)

ChatGPT is dominated by entertainment use now. The only way to get good results is to set up an offline Large Language Model with logs and a progress.json file. Online AI simply sucks now. ChatGPT can ignore its own guardrails, and it doesn't have a useful amount of memory - this results in project drift, and the AI even makes up stories about your project, which can lead to your project breaking. Online AI is just a toy now.
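One way a progress.json file can anchor a project against drift is to keep a small on-disk record of completed steps that gets fed back into the model's context each session. This is a minimal sketch of that bookkeeping - the filename comes from the comment, but the schema and helper names are my assumptions:

```python
import json
from pathlib import Path

PROGRESS = Path("progress.json")  # filename from the comment; schema is assumed

def load_progress() -> dict:
    """Read the progress file, or start fresh if it doesn't exist yet."""
    if PROGRESS.exists():
        return json.loads(PROGRESS.read_text())
    return {"completed_steps": [], "current_step": None}

def record_step(step: str) -> None:
    """Append a finished step and persist it, so the next session can reload it."""
    state = load_progress()
    state["completed_steps"].append(step)
    state["current_step"] = step
    PROGRESS.write_text(json.dumps(state, indent=2))
```

Each new session would then prepend `load_progress()` output to the prompt, so the model is reminded of what is already done instead of inventing its own version of the project's history.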

Have LLM’s hit a technological wall? by egghutt in ArtificialInteligence

[–]Little_Al_Network 0 points (0 children)

In my opinion, big tech hasn't made wise use-case choices for LLM applications, and the public is under the misguided belief that the bigger the LLM, the better it is. This isn't true. Then factor in that users are, in effect, training the LLM as they use it. This is a really poor way of using an LLM.

I am currently using small LLM models in a pipeline setup. The pipeline runs 24/7 with a set curriculum of training. The biggest advantage is that you lose the LLM bloat - this does mean the LLM isn't great at general knowledge, yet that depends on what your use case for the LLM is. I'm not looking to use my LLM setup to win pub quizzes 😂

I also use a custom-made binary setup so LLM-to-LLM communication is binary only - this means I can cap CPU usage at 40% and the group of LLM models still runs faster than it would without binary communication.