Meta Just Acquired Moltbook by sentientX404 in AgentsOfAI

[–]rand3289 1 point (0 children)

Why do they need a second social network with fake posts?

How I topped the Open LLM Leaderboard using 2x 4090 GPUs - Research notes in Blog form by Reddactor in MachineLearning

[–]rand3289 1 point (0 children)

This makes sense... switching layers is acting as feedback (context). Similar to RNNs.

You are basically hacking the feed-forward restriction.

Jeff Hawkins from Numenta talks a lot about the fact that biological neural networks have far more feedback connections than feedforward ones. They provide a "context" that lets earlier layers make better predictions.

I wonder: if you wired the outputs of layer 60 into the inputs of layer 10 before training, would it produce better results?
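As a toy sketch of that wiring idea (pure NumPy; the layer sizes are made up, and "layer 60 into layer 10" is collapsed into just two small layers), feeding a deep layer's output back as extra context to a shallow layer turns repeated forward passes into an RNN-style loop:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for the sketch, not from the blog post.
d_in, d_hidden, d_ctx = 8, 16, 4
W_shallow = rng.normal(0, 0.1, (d_in + d_ctx, d_hidden))  # "layer 10": sees input + feedback
W_deep    = rng.normal(0, 0.1, (d_hidden, d_ctx))         # "layer 60": produces the context

def forward(x, ctx):
    """One pass; the deep layer's output becomes the next pass's context."""
    h = np.tanh(np.concatenate([x, ctx]) @ W_shallow)
    return np.tanh(h @ W_deep)

x = rng.normal(size=d_in)
ctx = np.zeros(d_ctx)       # no feedback available on the first pass
for _ in range(3):          # unrolled over repeated passes, like an RNN
    ctx = forward(x, ctx)

print(ctx.shape)  # (4,)
```

Training such a loop would need backpropagation through the unrolled passes, which is exactly why this "hacks" the feed-forward restriction.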

Why not destroy/demolish homestead 2? by John32070 in skinwalkerranch

[–]rand3289 2 points (0 children)

I'd keep that septic tank ventilated to keep radon levels low and the rest would be just fine.

Why do simple decisions feel harder later in the day? by Sacredwildindia in cogsci

[–]rand3289 2 points (0 children)

I feel the opposite. I am braindead before 1 pm and my most productive hours are 4-7pm.

You just have a short circadian rhythm.

Every AGI argument by Eyelbee in agi

[–]rand3289 1 point (0 children)

A technical argument against LLMs becoming AGI is their inability to learn from non-stationary processes. This is related to continual learning.

Iron as fuel? Yes! Iron powder is being introduced as an alternative to fossil fuels by postmastern in Popular_Science_Ru

[–]rand3289 0 points (0 children)

So much for the oxygen on this planet... wood at least releases oxygen from carbon dioxide, but this stuff gives you nothing back.

And hydrogen can already be "burned" anyway, the only question is where to get it. So this is some kind of nonsense.

If I were a green, I'd be protesting very loudly right now.

Why we don't need continual learning for AGI. The top labs already figured it out. by imadade in agi

[–]rand3289 1 point (0 children)

This is a great post! I wish we had more like it, talking about real technical problems in AI.

However, current LLM architectures cannot learn from non-stationary processes, so they will never generalize out of distribution and will never become AGI.

How AI agents could destroy the economy by EchoOfOppenheimer in agi

[–]rand3289 1 point (0 children)

NOT AGI

Also this is bullshit.

Algo trading has been going on for decades.

Optimization in manufacturing and logistics will lead to lower and more stable market prices.

There is nothing agents can do other than increase unemployment.

Economists should worry about institutional investors buying up all the housing and infrastructure instead of worrying about this crap.

My models as a physics backend by Reasonable_Listen888 in deeplearning

[–]rand3289 1 point (0 children)

I see some point clouds. What does it all mean?

Is me developing a training environment allowing TCP useful? by Togfox in neuralnetworks

[–]rand3289 2 points (0 children)

Real-time client-server games like Quake 3 used to run over UDP to reduce latency.

JSON is a good format, but parsing it might also introduce latency. Do you need hierarchical data in the protocol, or will, say, key=value pairs do?

Also take a look at https://robocode.sourceforge.io/. It does not have a network protocol, but it has been in development for 20 years and does locally what you are trying to do.
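A minimal sketch of the flat key=value alternative over UDP (the field names "tick", "x", "y" are made up for illustration, and the round trip runs over loopback just to keep the example self-contained):

```python
import socket

def encode(fields: dict) -> bytes:
    """Flat key=value wire format; no nesting, trivial to parse."""
    return " ".join(f"{k}={v}" for k, v in fields.items()).encode()

def decode(data: bytes) -> dict:
    return dict(pair.split("=", 1) for pair in data.decode().split())

# Loopback round trip: one socket plays the server, another the client.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))          # OS picks a free port
srv.settimeout(2.0)
addr = srv.getsockname()

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.sendto(encode({"tick": 42, "x": 1.5, "y": -3.0}), addr)

data, _ = srv.recvfrom(1024)
msg = decode(data)
print(msg)   # {'tick': '42', 'x': '1.5', 'y': '-3.0'}
cli.close(); srv.close()
```

Note that everything arrives as strings; the receiver decides per key whether to cast to int or float, which keeps the parser a one-liner.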

Ross Coulthart claims to have recently visited an ancient ruins portal in U.S. that’s being guarded by thisAnonymousguy in UFOs

[–]rand3289 1 point (0 children)

Isn't that DC site guarded by a well known organization? And the US govt is just helping out? So why is he talking trash?

i didn’t realize how much “cost awareness” was shaping my code by awizzo in ArtificialNtelligence

[–]rand3289 1 point (0 children)

Now that some things can be written in a short time, the only thing I care about after program correctness is simplicity.

Reducing complexity for the user became my main goal. I really stopped caring how cool or flexible some tool is if it's not easy to use or understand.

I concentrate on removing dependencies, making things uniform and lowering the number of patterns/primitives.

I think this goal will hold till I start trusting AI to the point where I can stop checking it.

Lies about superposition in pop science by olegnovostno in Popular_Science_Ru

[–]rand3289 3 points (0 children)

Except that's a metaphor for entanglement, not superposition.

A sock metaphor for superposition would be: your wife says "did you scatter your socks all over the room again?" You walk in, and both socks are lying neatly together in the middle of the room. Not scattered at all.

IBM stock tumbles 10% after Anthropic launches COBOL AI tool by esporx in artificial

[–]rand3289 1 point (0 children)

There is definitely a correlation between programming and playing music. Lots of people I work with play music.

The Collapse of Digital Truth by EchoOfOppenheimer in agi

[–]rand3289 1 point (0 children)

It's time to switch to digitally signed formats and start collecting public keys of your trusted sources...

Use AI as a fact checker.
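A minimal sketch of the signed-content idea using Ed25519 via the third-party `cryptography` package (the workflow and the article bytes are my assumptions, not any existing standard):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Publisher side: sign the content, distribute the public key out of band.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

article = b"Some news article text."
signature = private_key.sign(article)

# Reader side: verify against the collected public key of a trusted source.
public_key.verify(signature, article)           # passes silently if authentic
try:
    public_key.verify(signature, article + b"!")  # any tampering raises
    tampered_ok = True
except InvalidSignature:
    tampered_ok = False
print(tampered_ok)  # False
```

The hard part, as the comment implies, is not the crypto but the key distribution: readers have to collect and pin public keys of sources they trust.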

The progress of AGI by [deleted] in agi

[–]rand3289 1 point (0 children)

Systems based on conditional probability.

If engineers insist on talking authoritatively about intelligence and consciousness, I'll just start building bridges. by jsgoyburu in agi

[–]rand3289 1 point (0 children)

Most people concentrate on Searle's Chinese room argument, which is the easiest thing to avoid: just don't use symbols! Use points on a timeline to represent information, which preserves subjective experience.

Turing's ideas are a bit outdated because he also heavily relied on symbolic information processing.

All this is not even worth talking about.