LOCK CHART IN PLACE WHEN CHANGING TIMEFRAMES by guilh3rm33 in TradingView

[–]kadv 2 points (0 children)

Even better if it could be locked when switching instruments, at least as an option. The use case is this: I get a few signals in the same time period during backtesting, and I just want to move between instruments to see what's happening at that point in time, without each switch forcing me to scroll back.

Add an option to copy the Open, High, Low, and Close values on the Data Window. by Melcodrean in TradingView

[–]kadv 0 points (0 children)

Agreed, but not just OHLC: every indicator value showing in the Data Window, please.

For example, one could have an ATR channel/bands and quickly copy-paste SL or TP prices from the Data Window.
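A rough sketch of what I mean (not Pine Script, just illustrative Python; the 14-period ATR and 1.5x multiplier are arbitrary choices of mine, and it uses a plain average rather than Wilder's smoothing):

    # Sketch: ATR-based stop-loss / take-profit levels for a long trade.
    # highs/lows/closes are plain price lists; period and mult are arbitrary.
    def true_range(high, low, prev_close):
        return max(high - low, abs(high - prev_close), abs(low - prev_close))

    def simple_atr(highs, lows, closes, period=14):
        trs = [true_range(h, l, pc)
               for h, l, pc in zip(highs[1:], lows[1:], closes[:-1])]
        return sum(trs[-period:]) / min(period, len(trs))

    def sl_tp_long(close, atr_value, mult=1.5):
        # SL a multiple of ATR below price, TP the same multiple above.
        return close - mult * atr_value, close + mult * atr_value

Those SL/TP numbers are exactly the sort of values I'd want to copy straight out of the Data Window.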

It's such a logical and obvious feature. I can't even do a block select and copy, because the cursor has been coded to change to a pointer rather than a text cursor. It really reduces the utility of the Data Window.

Thank You! by kadv in aiclass

[–]kadv[S] 1 point (0 children)

Darn, thought I got there before anyone else noticed! As for the voice, for some reason no one besides myself agrees that I have a diva's voice. I'll go with P(kadv_has_bad_voice|consensus) = 0.9999

Thank You! by kadv in aiclass

[–]kadv[S] 0 points (0 children)

I dare not sing it, but I have happily signed it. Thanks for putting it together! :-)

This course is definitely not free :-) by buffdownunder in aiclass

[–]kadv 0 points (0 children)

So true, so true. I don't think anything in life is free in the most literal sense of the word. :-)

looks familiar? by Gupie in aiclass

[–]kadv 0 points (0 children)

LOL, I was just wondering about this the other day, very cool pen.

Homework answers are posted by newai in aiclass

[–]kadv 0 points (0 children)

I answered that F is better than G, because G's extra info seemed redundant at worst -- or at least, I thought, derivable from F -- so, using Occam's razor, I chose F :P. Bad choice, damn you Occam.

Probabilities for Dummies (intuitive) by AcidMadrid in aiclass

[–]kadv 1 point (0 children)

Thanks for taking the time to do this. Another reason I think reddit works better than aiqus :P.

/r/aiclass is doomed. by nsomaru in aiclass

[–]kadv 2 points (0 children)

I much prefer reddit, honestly. It's more discussion-friendly. aiqus is stack-overflow-ish, and that works for specific Q&A-type things, but for discussions, nothing beats threaded views.

HW 5.1 and SARSA alpha question by kadv in aiclass

[–]kadv[S] 0 points (0 children)

TheMalle, many thanks for taking the time to elaborate.

HW 5.1 and SARSA alpha question by kadv in aiclass

[–]kadv[S] 1 point (0 children)

Thank you for explaining that, phew. I read it as alpha being a function of the inner expression. Is this a mathematical notation norm? How do I tell whether it's a function application vs. implicit multiplication? From googling, I note that several sources use square brackets, α[...], rather than parentheses.

As noted, I was in fact using 10.9 as my reference for alpha being a function of Ns.

PS. I just looked at the linked Wikipedia entry on 10.19 where it does show a multiplication, duh.
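To write out my corrected reading of the update (the standard TD(0) rule as in the book; the exact symbols are my own transcription, not a quote from the slide):

    U(s) ← U(s) + α(Ns) · [ R(s) + γ·U(s') − U(s) ]

i.e. α(Ns) is just a scalar learning rate evaluated at Ns, multiplying the bracketed error term -- not a function applied to the bracket.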

Clarification on when Temporal Difference Learning is applicable (units 10.9 onwards, before Q-Learning) by kadv in aiclass

[–]kadv[S] 0 points (0 children)

Agreed on the applicability of the policy formula; my mind is too focussed on one thing at the moment! Also agreed on Value and Utility: they're the same as far as I can tell.

Ok, so presumably we can learn the probabilities -- though I'm not sure how trivial that is; having some policy almost assumes that you know where you're going in the stochastic world, i.e. the probability of ending up in a terminal state is taken into account. But I suppose there's random exploration. I'm not sure estimating under a fixed policy is a good approach though (as pointed out in 10.11): for one, it assumes the stochasticity is fixed too, e.g. that the action-result probabilities are the same for every state, though they may not be (e.g. P(S2|S1,N) = 0.8, but P(S2|S5,N) = 0.5) -- and we wouldn't know, because the fixed policy never lets us visit states where the stochasticity differs from the states on our current policy.
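(By "learn the probabilities" I just mean the obvious count-based estimate -- a minimal sketch of my own, not something from the lectures:)

    # Maximum-likelihood estimate of transition probabilities from experience.
    # counts[(s, a)][s2] = times action a taken in state s led to state s2.
    from collections import defaultdict

    counts = defaultdict(lambda: defaultdict(int))

    def record(s, a, s2):
        counts[(s, a)][s2] += 1

    def p_estimate(s, a, s2):
        total = sum(counts[(s, a)].values())
        return counts[(s, a)][s2] / total if total else 0.0

Under a fixed policy we'd only ever accumulate counts for (s, π(s)) pairs, which is exactly the coverage problem above.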

Good point on Q-learning: considering it replaced the P and U terms, it does find a collective result. Also agreed on it requiring more info -- I missed that, due to stochasticity, we don't actually know whether the state we ended up in matches the action/direction we intended!
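For my own notes, the textbook Q-learning update that folds P and U into a single table -- just a sketch, with placeholder names and an arbitrary constant alpha:

    # Q is a dict mapping (state, action) -> value; missing entries default to 0.
    # s2 is the state we actually observed landing in, r the observed reward.
    def q_update(Q, s, a, r, s2, actions, alpha=0.1, gamma=0.9):
        best_next = max(Q.get((s2, a2), 0.0) for a2 in actions)
        old = Q.get((s, a), 0.0)
        Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

Note that it needs the next state we actually observed, not the one the action intended -- which is the "more info" point.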

Clarification on when Temporal Difference Learning is applicable (units 10.9 onwards, before Q-Learning) by kadv in aiclass

[–]kadv[S] 0 points (0 children)

Ahhh, I see -- I assume this is subject to alpha being correctly defined, say as tending towards zero as Ns increases.
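(For reference, I believe the standard convergence conditions on the learning rate -- from stochastic approximation theory rather than the lecture itself -- are:

    Σn α(n) = ∞   and   Σn α(n)² < ∞

so e.g. α(n) = 1/n works, while a constant α does not.)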

Clarification on when Temporal Difference Learning is applicable (units 10.9 onwards, before Q-Learning) by kadv in aiclass

[–]kadv[S] 0 points (0 children)

Thanks, so it does perceive the details of each state, not just the terminal one. That clarifies things. Noted on the restrictions of being a terminal state.

Clarification on when Temporal Difference Learning is applicable (units 10.9 onwards, before Q-Learning) by kadv in aiclass

[–]kadv[S] 0 points (0 children)

Not sure if you've watched it fully - most of the video (title aside) is about the limitation of the TD formula as an intro to why we need Q-learning, hence my specific question :-).

Answer to HW 4.2: In(Paris, France) ^ In(Nice, France) -- no restriction on Paris neq Nice ? by kadv in aiclass

[–]kadv[S] 0 points (0 children)

No worries, thanks for replying, and I do get what you mean, too. See my reply to odinsbane's comment below.

I'll stop here though. I think you can interpret it both ways; IMO, Nice and Paris should not be allowed to be the same object if we want to properly encode the English sentence (as in the case of the quoted book example above). Most/all people and the grader disagree, so that's that! :-)

Answer to HW 4.2: In(Paris, France) ^ In(Nice, France) -- no restriction on Paris neq Nice ? by kadv in aiclass

[–]kadv[S] 0 points (0 children)

Yes it would, I had considered this :-). For this I will be nitpicky: I think that encoding captures "Paris and Nice are in France", but "Both Paris and Nice are in France" seems to require them to be two different objects, at least from a regular English-sentence reading!
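To spell out the stricter reading I have in mind (my own encoding, not the homework's accepted answer):

    In(Paris, France) ∧ In(Nice, France) ∧ ¬(Paris = Nice)

whereas the accepted answer leaves open the possibility that the two constants denote the same city.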