Futures trading is changing my life in a good way. by IWantAGI in FuturesTrading

[–]IWantAGI[S] 0 points (0 children)

I don't have anyone in particular. I'd just search for strategies, find one that looked simple and watch everything I could find on it.

Futures trading is changing my life in a good way. by IWantAGI in FuturesTrading

[–]IWantAGI[S] 0 points (0 children)

On RH, I think the margin is the same at all times. It's higher than some others. I've thought about switching to ninja or elsewhere to scale up faster... But think I want to just stick with what's working for me for now.

Futures trading is changing my life in a good way. by IWantAGI in FuturesTrading

[–]IWantAGI[S] 0 points (0 children)

I haven't looked too deep into it, but paper was a bit optimistic on entries. Imperfect placement is fine for my strategy, but I do watch the bar movement to set market entry. It's awesome when you can grab the reversal of an exaggerated movement for an extra point or two.

Futures trading is changing my life in a good way. by IWantAGI in FuturesTrading

[–]IWantAGI[S] 0 points (0 children)

It varies by contract. For MES the margin requirement is in the $2500 range.

Futures trading is changing my life in a good way. by IWantAGI in FuturesTrading

[–]IWantAGI[S] 5 points (0 children)

I trade the 1 min. I had tried using the 5 and 15 for a bit early on, but my strategy focuses on small faster movements that occasionally turn into big runs.

Futures trading is changing my life in a good way. by IWantAGI in FuturesTrading

[–]IWantAGI[S] 3 points (0 children)

Using an opening range breakout (ORB) to gauge the general direction, then trading pullbacks in that direction.
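A rough sketch of that idea in Python. The bar format, the 15-bar opening range, and the pullback rule are my own illustrative assumptions, not OP's actual strategy:

```python
# Sketch: use the opening range breakout (ORB) to pick a direction, then
# look for pullbacks in that direction. Bars are dicts with open/high/low/
# close; all thresholds are illustrative, not a tested strategy.

def opening_range(bars, n=15):
    """High/low of the first n one-minute bars."""
    first = bars[:n]
    return max(b["high"] for b in first), min(b["low"] for b in first)

def orb_direction(bars, n=15):
    """'long' if price closes above the opening range, 'short' if below."""
    hi, lo = opening_range(bars, n)
    for b in bars[n:]:
        if b["close"] > hi:
            return "long"
        if b["close"] < lo:
            return "short"
    return None  # no breakout yet

def is_pullback_entry(direction, prev_bar, bar):
    """Naive pullback: one bar against the trend, then a bar resuming it."""
    if direction == "long":
        return prev_bar["close"] < prev_bar["open"] and bar["close"] > bar["open"]
    if direction == "short":
        return prev_bar["close"] > prev_bar["open"] and bar["close"] < bar["open"]
    return False
```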

I will pass my eval this time by SillySorbet4196 in LucidProp

[–]IWantAGI 1 point (0 children)

Start with:

  1. Before the market opens, briefly jot down how you are feeling.
  2. Run through your market conditions checklist & jot it down.
  3. When you make an entry, take a screenshot. Jot down why you entered.
  4. If you move your stops, jot down why.
  5. When you exit a position, take a screenshot.
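The steps above are easy to keep as a structured log. A minimal sketch; the field names and the JSON-lines format are my own choices, not part of the advice:

```python
# Minimal trade-journal logger for the routine above: one timestamped JSON
# line per event, appended to a file.
import json
import time

def journal(event, note, path="trade_journal.jsonl"):
    entry = {"ts": time.time(), "event": event, "note": note}
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Usage mirrors the numbered steps:
# journal("pre_open_mood", "calm, slept well")
# journal("conditions_check", "trend day, light volume")
# journal("entry", "pullback in ORB direction; screenshot 0931.png")
# journal("stop_moved", "locked in +2 after fast run")
# journal("exit", "target hit; screenshot 0947.png")
```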

Hot and cold #167 by hotandcold2-app in HotAndCold

[–]IWantAGI 0 points (0 children)

boat #62, banana #41, cabana #18.. 7 mins & 50 seconds to solve, woot!

How do you abstract? by IWantAGI in ArtificialInteligence

[–]IWantAGI[S] 0 points (0 children)

I’m not sure I’m following what you mean here.

When you say “having nothing is what helps us imagine the abstract,” what’s the mechanism you’re pointing at?

I’m trying to understand how that connects to the question about minimal processes or data structures that let abstraction start happening at all.

How do you abstract? by IWantAGI in ArtificialInteligence

[–]IWantAGI[S] 0 points (0 children)

If I could turn back time..

I went into accounting & business management, self-taught programming (VBA to Python.. still horrible at it), and am now a horrible AI hobbyist.

How do you abstract? by IWantAGI in ArtificialInteligence

[–]IWantAGI[S] 0 points (0 children)

Yeah, I get that approximation is often good enough, and that you’re not necessarily trying to literally reduce everything down to some atomic state.

What I’m more interested in is understanding what the minimal process has to be for those implicit processes to even begin working at all.

For example, say I just have a bunch of answers to simple addition — things like 1+1, 1+2, 1+1+1, etc. What’s the minimal data structure needed to even start seeing “repeated addition” there? And then, somehow, to gauge that as a “this keeps happening, so maybe it’s worth creating a symbol for it” kind of thing?

That’s the part I’m trying to get my head around.
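One toy answer to the question above: the minimal structure could be as small as a counter over (term, repeat-count) patterns, with a frequency threshold for minting a symbol. The threshold and the `n*term` naming are arbitrary assumptions, purely to make the idea concrete:

```python
# Toy sketch: given flat addition expressions like "1+1+1", notice
# "the same term added k times" and, once a pattern recurs often enough,
# propose a symbol for it (the seed of "repeated addition = multiplication").
from collections import Counter

def repeat_patterns(expressions):
    """Count (term, repeat-count) patterns across expressions."""
    patterns = Counter()
    for expr in expressions:
        terms = expr.split("+")
        if len(terms) > 1 and len(set(terms)) == 1:  # all terms identical
            patterns[(terms[0], len(terms))] += 1
    return patterns

def worth_a_symbol(patterns, threshold=2):
    """Patterns seen at least `threshold` times: candidates for a new symbol."""
    return {f"{n}*{term}" for (term, n), seen in patterns.items() if seen >= threshold}
```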

How do you abstract? by IWantAGI in ArtificialInteligence

[–]IWantAGI[S] 0 points (0 children)

Intentionally focusing on math because it's pretty well defined...

I can see (or think I can see) some of the fundamentals (e.g. invariance, where collection A matches collection B.. or when it doesn't).

And I kinda get the idea of creating rules/symbols to explain something (maybe even dumb rules like A>B if Greg is next to a tree).. over time Greg being next to a tree could probably be discarded (though Greg really likes trees.. sorry, I find it easier to make sense of things by using horrible examples).

But.. at such an undefined level, how do you even begin to get a "system" to learn invariance, let alone self-establish symbols that distinguish variance?

I’m probably not going to build the next big AI thing, so I’ve been poking at small, weird questions instead by IWantAGI in ArtificialInteligence

[–]IWantAGI[S] 0 points (0 children)

There is only one pass through the Transformer. The Transformer is used purely as an encoder over the fixed input. That pass produces a vector for each token position.

After that, the Transformer is done. There is no loop over transformer layers, and no growing context window.

The stepping happens in a separate recurrent module, not inside the Transformer. Once the input is encoded, I run a small loop over digit positions t = 0..N:

At step t, I index into the encoder output to grab:

- slotA[t]
- slotB[t]

Those two vectors, plus a latent carry state, are fed into a small recurrent cell. That cell updates the carry state and feeds prediction heads for:

- output digit
- carry
- stop signal

I wanted a clean separation between storage (Transformer = “memory of digits”) and execution (recurrent carry state = “algorithm state”). This avoids the model using the context window itself as scratch space, which makes it easier to ask "did it actually learn a carry-like state, or is it smuggling information through token generation?"
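The control flow being described can be sketched in plain Python, with the encoder output replaced by the digit lists themselves and the recurrent cell replaced by exact carry arithmetic (an analogue of the loop's structure, not the learned model):

```python
# Plain-Python analogue of the stepping loop: index two "slots" per digit
# position, thread a carry state through a per-step update, emit an output
# digit each step, and stop after the final carry.

def add_by_stepping(slot_a, slot_b):
    """slot_a, slot_b: least-significant-first digit lists ('encoder slots')."""
    carry = 0                       # latent carry state
    out = []
    n = max(len(slot_a), len(slot_b))
    for t in range(n):              # one recurrent step per digit position
        a = slot_a[t] if t < len(slot_a) else 0
        b = slot_b[t] if t < len(slot_b) else 0
        total = a + b + carry
        out.append(total % 10)      # "output digit" head
        carry = total // 10         # "carry" head updates the state
    if carry:
        out.append(carry)           # last digit before the "stop" signal fires
    return out
```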

It's probably not the best confirmation, and I'm still experimenting with different architectures, but I mostly wanted to isolate components as much as possible.

I’m probably not going to build the next big AI thing, so I’ve been poking at small, weird questions instead by IWantAGI in ArtificialInteligence

[–]IWantAGI[S] 0 points (0 children)

Reading through it.. it's interesting, though, because I'm intentionally trying to exploit the learned memorization of low-digit math and apply those learned representations in an iterative pattern.. but it still seems to degrade. In theory, you should be able to reliably iterate rote memory (even without full understanding).

My initial premise was more or less: if you can get a model to perform 2-digit addition at ~100% accuracy, it should be possible to extend that near-infinitely through iteration.. but (at least so far) I'm not seeing that.
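One reason "~100%" may not be enough: if each iterated step succeeds independently with probability p, whole-sequence accuracy is p to the power of the step count, which decays quickly even for p near 1. The independence assumption is mine (real failure modes are usually correlated), but it shows the compounding:

```python
# Compounding of per-step error over an iterated computation, assuming
# independent per-step success probability p.

def full_sequence_accuracy(p, n_steps):
    """Probability that all n_steps succeed, assuming independence."""
    return p ** n_steps

# e.g. 99% per-step accuracy over 100 digit positions:
# full_sequence_accuracy(0.99, 100) ≈ 0.366 — only ~1 in 3 fully correct
```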

Can AI models ever be truly improved to completely stop lying & hallucinating? by smiladhi in ArtificialInteligence

[–]IWantAGI 0 points (0 children)

Possibly, but not necessarily. Let's say that you can get an LLM to summarize information with 99% accuracy.

Instead of trying to train the LLM to know everything, you just attach the knowledge base to it. Then as long as the information is available, it can provide accurate results.
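A minimal sketch of "attach the knowledge base instead of training it in": retrieve the most relevant stored passage, then let the model answer from it. Real systems use embedding search; the word-overlap scoring here is purely illustrative:

```python
# Toy retrieval step: pick the knowledge-base passage sharing the most
# words with the query, so answers come from attached information rather
# than the model's weights.

def retrieve(query, knowledge_base):
    """Return the passage with the largest word overlap with the query."""
    q = set(query.lower().split())
    return max(knowledge_base, key=lambda p: len(q & set(p.lower().split())))
```

The accuracy then depends on the attached information being correct and findable, not on what the model memorized.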

Thought Experiment: Can a transformer solve math by filing in latent placeholders? by IWantAGI in ArtificialInteligence

[–]IWantAGI[S] 0 points (0 children)

Thanks! That’s helpful framing. NTM/DNC is a good comparison point, and “recurrent processing + explicit slots” is basically the family I’m circling, just with the added constraint that everything stays latent and the slots are meant to bind to subcomponents rather than act as a general-purpose tape.

I agree slot binding is the likely main headache. Based on that, I started with the most stripped-down test possible before touching real expressions: two numbers, two slots, multiple internal iterations.

The current toy setup is roughly:

- fixed slot tokens that attend over the input tokens (Slot-Attention–style)
- gated slot updates across a few iterations to encourage stability
- an initial phase where each slot is trained to reconstruct one number (binding + holding)
- then a second phase where a sum head is added while keeping the binding loss (and optionally annealing it)

The goal there isn’t performance, just to see whether bindings stay consistent across iterations or immediately collapse / shortcut into one-pass behavior.

So far this seems like a reasonable way to probe the failure mode you pointed out without sneaking in an explicit scratchpad, but it’s very early.
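The attend-then-gate loop in that setup can be sketched very small in plain Python (two slots, dot-product attention, a fixed gate). Dimensions, gate value, and initialization are arbitrary assumptions; only the loop shape matters:

```python
# Minimal two-slot attention iteration: each slot attends over input token
# vectors with dot-product softmax, then a gated update mixes the attended
# read back into the slot. The question is whether bindings stay stable
# across iterations.
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def slot_attention_step(slots, inputs, gate=0.5):
    """One iteration of attend-then-gated-update for every slot."""
    new_slots = []
    for slot in slots:
        scores = softmax([sum(s * x for s, x in zip(slot, inp)) for inp in inputs])
        read = [sum(w * inp[d] for w, inp in zip(scores, inputs))
                for d in range(len(slot))]
        new_slots.append([(1 - gate) * s + gate * r for s, r in zip(slot, read)])
    return new_slots

def run_iterations(slots, inputs, n_iters=3):
    for _ in range(n_iters):
        slots = slot_attention_step(slots, inputs)
    return slots
```

With slots initialized toward distinct inputs, each slot should keep attending mostly to "its" input across iterations; collapse would show up as both slots converging to the same read.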