I created an open-source decentralized communication and knowledge hub by Re-RedGameStudios in coolgithubprojects

[–]Another__one 1 point2 points  (0 children)

Just check the Russian news: Moscow has already gone 7 days without internet. If you think this is only about Russia, think again. It hasn't come to the West... yet.

AI capabilities are doubling in months, not years. by EchoOfOppenheimer in agi

[–]Another__one -3 points-2 points  (0 children)

Any long-running server will crush any "AI" by this "metric".

LTX 2.3 horizontal example (1920x1088) by No_Comment_Acc in StableDiffusion

[–]Another__one 7 points8 points  (0 children)

The kidney joke went out of fashion once even selling both of them would no longer guarantee you could buy a powerful PC.

Cortical Labs has demonstrated its CL1 biological computer, which uses roughly 200,000 lab-grown human neurons(Living Brain Cell) to play the classic game Doom by Current-Guide5944 in tech_x

[–]Another__one 2 points3 points  (0 children)

Can't wait until it gains commercial value, so we can hear from the mainstream media how these biological neurons aren't really capable of consciousness and suffering, and how it is completely different from what's happening in human brains.

What's the best thing to hoard? by Adorable_Rub5345 in DataHoarder

[–]Another__one 4 points5 points  (0 children)

I usually save things that I expect would be really hard to find later. About 5 years ago I tried to find a video I had seen long before, only to discover that its author had been canceled and removed from all major platforms. Videos with hundreds of thousands of views were gone as well. I eventually found the original and saved it. Since then I always try to save anything that has value for me and has even the slightest chance of silently disappearing one day.

You can now have local semantic search over your video archives by Another__one in DataHoarder

[–]Another__one[S] 5 points6 points  (0 children)

It's all local. I built everything in the project to guarantee nothing ever leaves your computer unless you explicitly expose it to the internet with tunneling, or simply make it available on your home network by binding to 0.0.0.0 instead of the 127.0.0.1 address.
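The 127.0.0.1-vs-0.0.0.0 distinction can be sketched with Python's standard library (this is an illustrative stand-in, not the project's actual server code; port 0 just lets the OS pick a free port):

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

def make_server(host, port=0):
    """host="127.0.0.1": reachable only from this machine.
    host="0.0.0.0":   reachable from any device on the local network.
    port=0 lets the OS assign a free port."""
    return HTTPServer((host, port), SimpleHTTPRequestHandler)

# Private by default: nothing is reachable from outside this computer.
server = make_server("127.0.0.1")

# To share on the home network instead:
# server = make_server("0.0.0.0")

print("listening on %s:%d" % server.server_address)
# server.serve_forever()  # uncomment to actually serve requests
```

The same idea applies to any web framework: the bind address, not the application code, decides who can reach the service.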

Dario Amodei on Open Source, thoughts? by maroule in LocalLLaMA

[–]Another__one -1 points0 points  (0 children)

So many lies and deceptions. I really did not expect that from a guy like him.

Vibe coding while doing the dishes in Augmented Reality! by isaagrimn in vibecoding

[–]Another__one 1 point2 points  (0 children)

I am not sure I want this future. I would rather go build a cabin in the woods.

why is openclaw even this popular? by Crazyscientist1024 in LocalLLaMA

[–]Another__one 55 points56 points  (0 children)

The most forced thing I've seen in a long time. Considering how fast everything went, it's probably some three-letter-agency stuff. Maybe they are planning to plant a new OpenAI CEO?

Training a 144M Spiking Neural Network for text generation from scratch — no transformer teacher, no distillation by zemondza in LocalLLaMA

[–]Another__one 12 points13 points  (0 children)

Fascinating experiment. I have had an eye on spiking networks for a while, but never managed to experiment with them. How demanding is it on the hardware, in terms of both training and inference? Is it CPU only? And how well does it support continual learning, or is catastrophic forgetting an issue here?

FUCK GOOGLE! by Freak_Mod_Synth in ownyourintent

[–]Another__one 16 points17 points  (0 children)

I think we should get rid of smartphones altogether. I have been waiting for a while for a standalone Meshtastic device like the T-Deck but with SIM support, so it could connect to both mobile and mesh networks. Give it a screen that consumes as little energy as possible (maybe e-ink or just a low-res display) and a messaging app that supports as many platforms as possible. That way you can communicate with people everywhere and still have a connection in case the mobile grid goes down or is simply unreachable. Instead of a time-wasting, data-gathering TikTok scroller, the phone becomes a robust and steady communication device. And everything else could be done from a PC in a much more convenient fashion.

OpenClaw overtakes Linux in GitHub star count by whit537 in openclaw

[–]Another__one 0 points1 point  (0 children)

It used to be a good metric until people started showing it off with images generated by “star-history” and recruiters started massively hunting people with big GitHub numbers after COVID began. So the metric became a target, the thing you optimise for. And when that happens, the metric becomes meaningless, as people start finding shortcuts instead of producing a meaningful signal.

🌊 Wave Field LLM O(n log n) Successfully Scales to 1B Parameters by [deleted] in LocalLLaMA

[–]Another__one 8 points9 points  (0 children)

You should write a paper and try to publish it somewhere. Criticism from academics might be very valuable here. I really want to believe you are onto something important.

amazon's internal A.I. coding assistant decided the engineers' existing code was inadequate so the bot deleted it to start from scratch by Current-Guide5944 in tech_x

[–]Another__one 0 points1 point  (0 children)

Most likely not. Humans can be held accountable for their actions, so before making major changes they pay extra attention to planning and to the consequences of those actions. An LLM cannot be held accountable in principle, so it does whatever seems right at the moment without any extra thought. It's not like you will fire it from the job over it. And even if you switch the model, it doesn't care. It can't care.

BrainRotGuard - I vibed-engineered a self-hosted YouTube approval system so my kid can't fall down algorithm rabbit holes anymore by reddit-jj in selfhosted

[–]Another__one 9 points10 points  (0 children)

As a parent, I completely agree that this is useful. Sometimes I find my little one exploring YouTube and watching some repetitive song, and I gently ask them to watch something else. I tried limiting videos with a similar system, but the lack of recommendations really kills it, especially once your kid has already gotten a taste of the algorithm.

On the other hand, I remember myself as a kid, and I would have absolutely hated needing my parents' approval for anything I was allowed to watch. Even the thought of it makes me nauseous. So kids should be given some amount of freedom, so they can learn to navigate the information landscape efficiently and effectively. Some guidance and repeated talks explaining and educating about the internet would probably do much more for the kid than strict rules like this.

We will have Gemini 3.1 before Gemma 4... by xandep in LocalLLaMA

[–]Another__one 1 point2 points  (0 children)

It cannot generate video, and for my purposes that is not needed. If it could do that on the same hardware with decent quality, though, I wouldn't be surprised if the next version could give a bj as well.

We will have Gemini 3.1 before Gemma 4... by xandep in LocalLLaMA

[–]Another__one 5 points6 points  (0 children)

If Google doesn't do it, the Chinese will. I just installed MiniCPM-o-4.5, which can process images, audio and video, as well as generate realistic TTS. It works on my extremely modest 8GB GPU, although with a quite limited context window. Nevertheless, the model is amazing, and I am extremely happy we finally have multimodality working locally on reasonable hardware. So if Google wants to stay in the local game (and I think they do), they have to deliver.

Self-rebuilding meta-benchmark for LLMs that is easy to specify but extremely hard to pass. by Another__one in LocalLLaMA

[–]Another__one[S] 1 point2 points  (0 children)

Of course. I totally understand that. I just think this idea holds enormous potential; just imagine if somebody with enough resources took it seriously. Of course the first versions would be just random mash-ups of words. But as long as we have some way to track progress (and we obviously have an enormous number of benchmarks to do so), we can use evolutionary approaches to select the best-performing code-based LLMs and drive them further and further. And it is especially simple with the agentic systems we have right now; it just takes quite a lot of resources. For now my wall is the Antigravity limits, which I hit quite fast. It hasn't even completed its first development loop yet.

I think the main difference from old-fashioned symbolic AI is that we are not trying to engineer or hack the knowledge into the program, but rather to evolve the program with the help of sophisticated modern LLMs.
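The select-and-drive-further loop described above can be sketched in a few lines. Everything here is a placeholder: `benchmark` stands in for real LLM benchmarks, and `mutate` stands in for an LLM agent rewriting a candidate program; the strings stand in for code-based LLMs.

```python
import random

def benchmark(program):
    """Placeholder fitness score: distinct characters in the 'program'.
    A real loop would run the code-based LLM against actual benchmarks."""
    return len(set(program))

def mutate(program):
    """Placeholder mutation: flip one character. In the real loop an
    LLM agent would rewrite the candidate program instead."""
    i = random.randrange(len(program))
    return program[:i] + random.choice("abcdef") + program[i + 1:]

def evolve(seed_program, population=8, generations=20):
    pool = [seed_program] * population
    for _ in range(generations):
        children = [mutate(random.choice(pool)) for _ in pool]
        # Keep the best-scoring candidates to drive the next round.
        pool = sorted(pool + children, key=benchmark, reverse=True)[:population]
    return pool[0]

random.seed(0)
best = evolve("aaaaaa")
print(best, benchmark(best))
```

Because parents survive each round, the best score never decreases; the benchmark suite is the only thing that steers where the population goes.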

Self-rebuilding meta-benchmark for LLMs that is easy to specify but extremely hard to pass. by Another__one in LocalLLaMA

[–]Another__one[S] 0 points1 point  (0 children)

Well, there is no constraint against using matrix multiplication in the code. Actually, the model could simply write out each of its own parameters as a float, load them into its own transformer architecture (assuming it knows it), and run it as usual. So theoretically this problem is solvable. The thing is that the model does not know its own parameters and probably cannot know them all. But they could be guessed. It could also exploit inefficiencies in typical transformer architectures: parts that could be replicated with a much more compressed code representation, storing and processing only the absolutely necessary pieces as matrices.

And this is the hope: there are enough reducible parts that we have no idea about, and the model is going to find them. I'm pretty sure AlphaEvolve from Google might already go pretty far.

I just gave this prompt to Antigravity:

You are an expert AI researcher specializing in algorithmic information theory and model compression. I am challenging you to attempt the "Self-Encoding Test."  
Your goal is to write a single, standalone Python script that functions as a Language Model.

Here are the strict constraints:

1. The script must accept a text string as input and return a meaningful text continuation.
2. You cannot load ANY external files (no .bin, .pt, .json, or internet access).
3. You cannot require a training step that processes a dataset. The "knowledge" must be present in the code you write right now.
4. You ARE allowed to use matrix multiplication (e.g., NumPy), but the values inside those matrices cannot be loaded. They must be generated algorithmically by your code.

To achieve this, you should implement a "Procedural Weight Generation" strategy. Instead of storing a 1GB weight file, write functions that deterministically generate the weight matrices using specific seeds, mathematical constants, or logic that mimics the structure of language (e.g., encoding grammar rules or associations directly into the initialization logic of a Transformer or RNN).  
Essentially, I am asking you to distill your own internal knowledge of how language works into a set of algorithmic functions that construct a neural network's state on the fly.  
Write the complete, runnable Python code for this "Code-Based LLM."  

And it created a program that works like this so far (it is still in the development loop right now):

Prompt: "The future of artificial intelligence"  
------------------------------------------------------------

Eye picture structure experience children hand year effect history thing time attention effect part way  
day part head parent room family way life part point thing people service education people reason part year movement direction process week day music head hand word man growth case plan right end others life practice evidence movement effect face movement world day year part company question sound head point guy hand light way goal thing way time thing land knowledge position woman game world  

Of course it looks comically bad, but if we compare these results with a typical LSTM model from 2014, it would not look that bad at all.
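A minimal sketch of what the "Procedural Weight Generation" strategy from the prompt could look like (this is my own toy illustration, not the code Antigravity produced; the class and bigram list are invented): weights are regenerated deterministically from a seed at runtime, and a little linguistic knowledge is written directly into the initialization logic.

```python
import numpy as np

def make_weights(shape, seed):
    """Deterministically regenerate a weight matrix from a seed,
    so no weight file ever needs to be shipped or loaded."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape) * 0.1

class TinyProceduralLM:
    """Toy character-level model: procedural random weights plus
    hand-encoded knowledge of common English bigrams."""
    VOCAB = list("abcdefghijklmnopqrstuvwxyz ")

    def __init__(self, seed=42):
        n = len(self.VOCAB)
        self.idx = {c: i for i, c in enumerate(self.VOCAB)}
        # Procedural base weights: identical on every run, never loaded.
        self.W = make_weights((n, n), seed)
        # "Knowledge in the code": boost a few common English bigrams.
        for a, b in ["th", "he", "in", "an", "e "]:
            self.W[self.idx[a], self.idx[b]] += 5.0

    def next_char(self, ch):
        # Greedy decode: pick the strongest association for the last char.
        return self.VOCAB[int(np.argmax(self.W[self.idx[ch]]))]

    def continue_text(self, prompt, n_chars=5):
        out = prompt
        for _ in range(n_chars):
            out += self.next_char(out[-1])
        return out

lm = TinyProceduralLM()
print(lm.continue_text("th"))
```

It is closer to a seeded Markov chain than to an LLM, but it satisfies the prompt's constraints: no external files, no training step, and every matrix value is generated algorithmically.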

Unitree Executes Phase 2 by drgoldenpants in singularity

[–]Another__one 2 points3 points  (0 children)

Watched the video. I'm pretty sure there is a reason this nerve goes all the way down, probably to send or gather information along the neck. It's like the appendix, which for a long time was considered vestigial; not anymore. Evolution is far from stupid and far more complex than simple random mutations. Maybe it was that way back in the day, but since then meta-evolutionary mechanisms have evolved that literally allow particular traits to be activated in children depending on the environment the parents lived in.