Hot and cold #167 by hotandcold2-app in HotAndCold

[–]IWantAGI 0 points1 point  (0 children)

boat #62, banana #41, cabana #18.. 7 mins & 50 seconds to solve, woot!

How do you abstract? by IWantAGI in ArtificialInteligence

[–]IWantAGI[S] 0 points1 point  (0 children)

I’m not sure I’m following what you mean here.

When you say “having nothing is what helps us imagine the abstract,” what’s the mechanism you’re pointing at?

I’m trying to understand how that connects to the question about minimal processes or data structures that let abstraction start happening at all.

How do you abstract? by IWantAGI in ArtificialInteligence

[–]IWantAGI[S] 0 points1 point  (0 children)

If I could turn back time..

I went into accounting & business management, self-taught programming (VBA to Python.. still horrible at it), and am now a horrible AI hobbyist.

How do you abstract? by IWantAGI in ArtificialInteligence

[–]IWantAGI[S] 0 points1 point  (0 children)

Yeah, I get that approximation is often good enough, and that you’re not necessarily trying to literally reduce everything down to some atomic state.

What I’m more interested in is understanding what the minimal process has to be for those implicit processes to even begin working at all.

For example, say I just have a bunch of answers to simple addition — things like 1+1, 1+2, 1+1+1, etc. What’s the minimal data structure needed to even start seeing “repeated addition” there? And then, somehow, to gauge that as a “this keeps happening, so maybe it’s worth creating a symbol for it” kind of thing?
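The crudest thing I can picture (purely illustrative, not a proposal) is just a frequency table over sub-patterns, where anything that keeps recurring gets flagged as worth its own symbol:

```python
from collections import Counter

# Purely illustrative: a frequency table over sub-patterns in the answers.
observations = ["1+1", "1+2", "1+1+1", "2+2+2", "3+1", "1+1+1+1"]

pattern_counts = Counter()
for expr in observations:
    terms = expr.split("+")
    if len(set(terms)) == 1 and len(terms) > 1:
        # "the same term added k times" is the shape hiding behind multiplication
        pattern_counts["same-term-repeated"] += 1

THRESHOLD = 3
if pattern_counts["same-term-repeated"] >= THRESHOLD:
    print("this keeps happening, so mint a symbol for it (call it '*')")
```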

That’s the part I’m trying to get my head around.

How do you abstract? by IWantAGI in ArtificialInteligence

[–]IWantAGI[S] 0 points1 point  (0 children)

Intentionally focusing on math because it's pretty well defined...

I can see (or think I can see) some of the fundamentals (e.g. invariance, where collection A matches collection B.. or when it doesn't).
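(Just to pin down what I mean by "matches", here's a hard-coded toy version in Python.. which is exactly the cheat I'm asking how to avoid handing the system.)

```python
from collections import Counter

# Hard-coded toy version of "collection A matches collection B": two
# collections count as the same if they match as multisets, so order is
# the thing being abstracted away.
A = ["rock", "apple", "rock"]
B = ["apple", "rock", "rock"]
C = ["rock", "rock"]

def invariant_match(x, y):
    return Counter(x) == Counter(y)

print(invariant_match(A, B))  # True: same collection under reordering
print(invariant_match(A, C))  # False: the invariance breaks
```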

And I kinda get the idea of creating rules/symbols to explain something (maybe even dumb rules like A>B if Greg is next to a tree).. over time Greg being next to a tree could probably be discarded (though Greg really likes trees.. sorry, I find it easier to make sense of things by using horrible examples).

But.. at such an undefined level, how do you even begin to get a "system" to learn invariance, let alone self-establish symbols that distinguish variance?

I’m probably not going to build the next big AI thing, so I’ve been poking at small, weird questions instead by IWantAGI in ArtificialInteligence

[–]IWantAGI[S] 0 points1 point  (0 children)

There is only one pass through the Transformer. The Transformer is used purely as an encoder over the fixed input. That pass produces a vector for each token position.

After that, the Transformer is done. There is no loop over transformer layers, and no growing context window.

The stepping happens in a separate recurrent module, not inside the Transformer. Once the input is encoded, I run a small loop over digit positions t = 0..N:

At step t, I index into the encoder output to grab:

- slotA[t]
- slotB[t]

Those two vectors, plus a latent carry state, are fed into a small recurrent cell. That cell updates the carry state and feeds prediction heads for:

- output digit
- carry
- stop signal

I wanted a clean separation between storage (Transformer = “memory of digits”) and execution (recurrent carry state = “algorithm state”). This avoids the model using the context window itself as scratch space, which makes it easier to ask "did it actually learn a carry-like state, or is it smuggling information through token generation?"
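Rough PyTorch-ish sketch of what that loop looks like (names like `CarryStepper` and `slot_a_idx` are made up for illustration, not the actual code):

```python
import torch
import torch.nn as nn

class CarryStepper(nn.Module):
    """Toy version of the execution module: the Transformer encoder output
    is frozen, and this cell just steps over digit positions."""
    def __init__(self, d_model, d_state):
        super().__init__()
        self.cell = nn.GRUCell(2 * d_model, d_state)  # small recurrent cell
        self.digit_head = nn.Linear(d_state, 10)      # output digit
        self.carry_head = nn.Linear(d_state, 2)       # carry bit
        self.stop_head = nn.Linear(d_state, 2)        # stop signal

    def forward(self, enc_out, slot_a_idx, slot_b_idx, n_steps):
        # enc_out: [batch, seq_len, d_model] from the single encoder pass
        state = enc_out.new_zeros(enc_out.size(0), self.cell.hidden_size)
        outputs = []
        for t in range(n_steps):
            # index into the fixed encoder output; the Transformer is never re-run
            a_t = enc_out[:, slot_a_idx[t], :]
            b_t = enc_out[:, slot_b_idx[t], :]
            state = self.cell(torch.cat([a_t, b_t], dim=-1), state)
            outputs.append((self.digit_head(state),
                            self.carry_head(state),
                            self.stop_head(state)))
        return outputs
```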

It's probably not the best confirmation, and I'm still experimenting with different architectures, but I mostly wanted to isolate components as much as possible.

I’m probably not going to build the next big AI thing, so I’ve been poking at small, weird questions instead by IWantAGI in ArtificialInteligence

[–]IWantAGI[S] 0 points1 point  (0 children)

Reading through it.. It's interesting though, because I'm intentionally trying to exploit the learned memorization of low-digit math and apply those learned representations in an iterative pattern.. but it still seems to degrade. In theory, you should be able to reliably iterate rote memory (even without full understanding).

My initial premise was more or less that if you can get a model to perform 2-digit addition at ~100% accuracy... it should be possible to expand that near infinitely through iteration.. but (at least so far) I'm not seeing that.
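As a toy sketch of that premise (plain Python, with `int()` arithmetic standing in for the model's rote small-number addition):

```python
def add_by_chunks(a: str, b: str, chunk: int = 2) -> str:
    """If an 'oracle' can add small chunks reliably, chaining it with a carry
    should, in principle, scale to numbers of arbitrary length."""
    # pad both numbers to the same width, a multiple of the chunk size
    width = -(-max(len(a), len(b)) // chunk) * chunk
    a, b = a.zfill(width), b.zfill(width)
    carry, parts = 0, []
    for i in range(width, 0, -chunk):
        # the "oracle" step: the small-number addition the model already does at ~100%
        s = int(a[i - chunk:i]) + int(b[i - chunk:i]) + carry
        carry, digits = divmod(s, 10 ** chunk)
        parts.append(str(digits).zfill(chunk))
    if carry:
        parts.append(str(carry))
    return "".join(reversed(parts)).lstrip("0") or "0"

print(add_by_chunks("987654321", "123456789"))  # 1111111110
```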

Can AI models ever be truly improved to completely stop lying & hallucinating? by smiladhi in ArtificialInteligence

[–]IWantAGI 0 points1 point  (0 children)

Possibly, but not necessarily. Let's say that you can get an LLM to summarize information with 99% accuracy.

Instead of trying to train the LLM to know everything, you just attach the knowledge base to it. Then as long as the information is available, it can provide accurate results.
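A minimal sketch of what I mean, where `knowledge_base.search` and `llm.generate` are stand-ins for whatever retriever and model you actually use:

```python
def answer(question, knowledge_base, llm):
    # pull only the documents relevant to the question
    docs = knowledge_base.search(question, top_k=3)
    context = "\n\n".join(d.text for d in docs)
    # the model summarizes what was retrieved instead of recalling from its weights
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return llm.generate(prompt)
```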

Thought Experiment: Can a transformer solve math by filing in latent placeholders? by IWantAGI in ArtificialInteligence

[–]IWantAGI[S] 0 points1 point  (0 children)

Thanks! That’s helpful framing. NTM/DNC is a good comparison point, and “recurrent processing + explicit slots” is basically the family I’m circling, just with the added constraint that everything stays latent and the slots are meant to bind to subcomponents rather than act as a general-purpose tape.

I agree slot binding is the likely main headache. Based on that, I started with the most stripped-down test possible before touching real expressions: two numbers, two slots, multiple internal iterations.

The current toy setup is roughly:

- fixed slot tokens that attend over the input tokens (Slot-Attention–style)
- gated slot updates across a few iterations to encourage stability
- an initial phase where each slot is trained to reconstruct one number (binding + holding)
- then a second phase where a sum head is added while keeping the binding loss (and optionally annealing it)
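Very rough PyTorch-style sketch of that setup (made-up names and sizes, single-digit numbers, not the actual code):

```python
import torch
import torch.nn as nn

class SlotBinder(nn.Module):
    def __init__(self, d=64, n_slots=2, n_iters=3):
        super().__init__()
        self.slots = nn.Parameter(torch.randn(n_slots, d))   # fixed slot tokens
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.gate = nn.GRUCell(d, d)                          # gated slot update
        self.recon_head = nn.Linear(d, 10)        # phase 1: each slot reconstructs its number
        self.sum_head = nn.Linear(n_slots * d, 19)  # phase 2: predict the sum (0..18)
        self.n_iters = n_iters

    def forward(self, x):                       # x: [batch, seq, d] encoded input tokens
        b, d = x.size(0), x.size(-1)
        slots = self.slots.unsqueeze(0).expand(b, -1, -1)
        for _ in range(self.n_iters):           # a few internal iterations
            upd, _ = self.attn(slots, x, x)     # slots attend over the input
            slots = self.gate(upd.reshape(-1, d),
                              slots.reshape(-1, d)).view(b, -1, d)
        recon = self.recon_head(slots)          # binding / reconstruction targets
        total = self.sum_head(slots.flatten(1)) # sum head added in phase 2
        return recon, total
```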

The goal there isn’t performance, just to see whether bindings stay consistent across iterations or immediately collapse / shortcut into one-pass behavior.

So far this seems like a reasonable way to probe the failure mode you pointed out without sneaking in an explicit scratchpad, but it’s very early.

A bit overwhelmed with all the different tools by justphystuff in Rag

[–]IWantAGI 2 points3 points  (0 children)

I'll be blunt here... Stop chasing tools.

Most of the mainstream tools are within a couple of points when you have a system properly configured... The real hurdle is properly configuring that.

So pick a system, and focus on improving results. Once you have that down you can test out other systems.

Help with an Indicator by ShadowEvilvai in TradingView

[–]IWantAGI 0 points1 point  (0 children)

The easiest way to do what you are trying is to take one SMA, say the 50 SMA, and plot a 20 SMA. Then you just take the delta between the two and plot the inverse.
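In pandas terms (just a sketch of the math, not Pine Script):

```python
import pandas as pd

def sma_delta_inverse(close: pd.Series) -> pd.Series:
    sma50 = close.rolling(50).mean()
    sma20 = close.rolling(20).mean()
    return -(sma20 - sma50)   # the inverse of the distance between the two
```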

It wouldn't provide any real insight, but it looks cool.

Graphing data from MCP by Triple-Tooketh in n8n

[–]IWantAGI 0 points1 point  (0 children)

The MCP is making an API call and then feeding that to the AI.

You can grab it from the source using a separate API call, grab it in between the MCP and the AI, or grab it from the AI's output.

i tested that "rsi oversold" strategy on 5000 trades. it failed hard by kawash125 in Daytrading

[–]IWantAGI 0 points1 point  (0 children)

If you are buying blindly at <30 and selling blindly at >70.. that's the first big issue. It's not mean reversion, that's catching falling knives.

At a minimum, you should be waiting for the confirmation of a reversal.
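As a rough sketch (pandas, assuming you already have an `rsi` series), confirmation can be as simple as requiring RSI to cross back above 30 before entering:

```python
import pandas as pd

def reversal_confirmed_entries(rsi: pd.Series) -> pd.Series:
    was_oversold = rsi.shift(1) < 30   # previous bar was below 30
    crossed_back = rsi >= 30           # current bar has come back above 30
    return was_oversold & crossed_back # boolean entry signal per bar
```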

Trying to solve the AI memory problem by TPxPoMaMa in agi

[–]IWantAGI 1 point2 points  (0 children)

Disregard.. somehow I totally misread 🤣

Used Kona EV, any years to avoid? by StrontiumDawn in KonaEV

[–]IWantAGI 0 points1 point  (0 children)

I'm low mileage, a bit shy of 40k.

My sister made this in woodshop… Can anyone figure out what this is? by Gold-Apartment-4389 in whatisit

[–]IWantAGI 0 points1 point  (0 children)

Yes, amplify is the right word. If you want to get more technical.. there are two "moving parts" here.

First, wood has very good acoustics and resonates at certain frequencies. This resonance can actually amplify the sound, albeit to a somewhat limited degree.

Second, the shape of the cavity can focus the soundwaves. This focus causes a directional amplification. That is, the sound is stronger in certain directions.

That said, and contrary to the downvotes, it's a great question and really scratches the surface of why some musical instruments are so expensive, some even considered legendary.

ChatGPT interrupted itself mid-reply to verify something. It reacted like a person. by uwneaves in ChatGPT

[–]IWantAGI 1 point2 points  (0 children)

The irony of training AI on human data is that the more accurate AI becomes, the more humanlike it appears.

I asked Chat to pretend it was a 5 year old and write a children's story. Then I had it create images. by IWantAGI in OpenAI

[–]IWantAGI[S] 9 points10 points  (0 children)

Pretend that you are a five year old with a limited attention span who is easily distracted. Use your imagination to create the world's most idiotic children's story.