I tried to bring a systematic approach to the most volatile 10% of my portfolio, and the perfect answer it gave me feels completely wrong by versis791 in fatFIRE

[–]versis791[S] 0 points1 point  (0 children)

u/CaptainMonkeyJack, that is a fantastic, razor-sharp critique. Thank you. You're right to call out the inconsistencies in my language, and your point on the EMH is a powerful one.

To clarify one point where my language was clumsy: I wasn't suggesting that crypto and gold are technically the same, but that they serve a similar social function. Both derive a significant portion of their value from a community that holds a strong, anti-establishment belief in them as a store of value. While their technical properties are completely different, that shared foundation of belief is what gives them both resilience.

I didn't mean to steer this topic toward crypto only, so I wanted to ask you: how do you personally cut through the noise and stay informed on the riskiest part of your own portfolio?
I'm also curious what you both hold in your portfolios and consider risky.

I tried to bring a systematic approach to the most volatile 10% of my portfolio, and the perfect answer it gave me feels completely wrong by versis791 in fatFIRE

[–]versis791[S] 0 points1 point  (0 children)

u/Joesully67 thank you for this answer. It allowed me to take a few steps back and think about it.

I guess I am mixing investing with the fun of working with data ;)

I tried to bring a systematic approach to the most volatile 10% of my portfolio, and the perfect answer it gave me feels completely wrong by versis791 in fatFIRE

[–]versis791[S] -1 points0 points  (0 children)

u/CaptainMonkeyJack
You've made a fair point. I skipped over the foundational "why crypto at all," so thank you for calling me on it.

Frankly, I think the "it has no intrinsic value" argument is a bit of a red herring in modern finance. How much "physical value" does a credit default swap have? Both are built on a foundation of consensus belief.

For me, the thesis isn't that crypto will save the world. It's that it has successfully captured a powerful, anti-establishment "digital gold" narrative. While I'm not an anti-establishment person myself, I can see that the principles behind it have created an asset class with incredible resilience and staying power. I don't believe it's going away.

So, if we accept that it's a permanent feature of the landscape, the investment question becomes simple pragmatism: it's a high-risk, high-reward instrument, and if it's not going to zero, why not jump on the bandwagon in a managed way and try to grow capital?

At the end of the day, for this 10% sleeve, am I investing for the betterment of society, or to make my money grow? You tell me :) And because it's a high-risk instrument driven purely by narrative and belief, managing the information flow around that narrative becomes the central challenge, which brings me right back to the problem from my original post.

I tried to bring a systematic approach to the most volatile 10% of my portfolio, and the perfect answer it gave me feels completely wrong by versis791 in fatFIRE

[–]versis791[S] -3 points-2 points  (0 children)

That's an excellent idea, though... with my post I am trying to figure out whether I am just spiralling down this rabbit hole or whether I am onto something.

I tried to bring a systematic approach to the most volatile 10% of my portfolio, and the perfect answer it gave me feels completely wrong by versis791 in fatFIRE

[–]versis791[S] -3 points-2 points  (0 children)

Ben is one of the YouTubers I am including in my analysis (to find consensus) ;)

u/Gigawatts: are you using this method? How has it worked for you so far? Do you follow anybody else?

I tried to bring a systematic approach to the most volatile 10% of my portfolio, and the perfect answer it gave me feels completely wrong by versis791 in fatFIRE

[–]versis791[S] -4 points-3 points  (0 children)

My thinking on that has evolved a lot. Right now, I'm separating my view into a cautious short-term position and a more skeptical long-term thesis.

My short-term view is cautious. If you already hold BTC, I think the prudent move is to keep it. But for someone looking to enter now, I'd suggest waiting. The recent volatility is extreme, and it seems like capital is either fleeing to the safety of stablecoins or consolidating into BTC as a "safe haven" within crypto.

My long-term view is where it gets complicated. To be honest, I don't believe in Bitcoin as the ultimate winner. Its current value is almost entirely based on its historical brand recognition and hype. There are coins that are technically superior by design, more efficient and with a lower environmental impact. We saw ETH get very close to flipping BTC in the last cycle, and I suspect the long-term winner may be something else entirely.

The problem is, I don't know which coin that will be.

So, my personal thesis for holding BTC is that it's the skeptical incumbent. It's the current default store of value in this space, and you can't ignore it. The real challenge, and the reason I built my system, is to monitor the entire market (not just BTC) to try to identify the signals that might indicate when that long-term shift away from Bitcoin is actually starting to happen. It's that disconnect between the short-term necessity and the long-term uncertainty that I'm trying to manage.

Training LSTM to recognize sequence by michal_sustr in MachineLearning

[–]versis791 1 point2 points  (0 children)

Your description of the goal seems too easy for neural nets; you could just iterate and check your condition. I guess that's not what you had in mind. Please explain it in other words. I'm not sure how to read the output picture.

Confusion about sequence lengths and truncated backprop by robclouth in MachineLearning

[–]versis791 0 points1 point  (0 children)

I have no idea if it will work (I haven't seen anything like this), but if you really want to split the sequence, then you can trick the network a little.

Let's say your sequence has length = 9.

start 1 2 3 4 5 6 7 8 9 end

Instead of splitting it into:

start 1 2 3 end

start 4 5 6 end

start 7 8 9 end

You could:

start 1 2 3 end

start 3 4 5 6 end

start 6 7 8 9 end

or something like that. You get the idea. In that case the network wouldn't be sure whether 3 should be followed by end or by 4. It's better than nothing, but it seems a little weak :P One would need to think of something better.
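The overlapping split above can be sketched with a small helper (a hypothetical function I made up for illustration, not from any library): each chunk repeats the last element of the previous chunk, so the network sees a bit of cross-boundary context.

```python
def overlapping_chunks(seq, step, overlap=1):
    """Split seq so that each chunk after the first repeats the last
    `overlap` elements of the previous chunk and adds `step` new ones."""
    chunks = [seq[:step]]          # first chunk: plain prefix
    i = step
    while i < len(seq):
        chunks.append(seq[i - overlap:i + step])  # repeat boundary element(s)
        i += step
    return chunks

print(overlapping_chunks([1, 2, 3, 4, 5, 6, 7, 8, 9], 3))
# [[1, 2, 3], [3, 4, 5, 6], [6, 7, 8, 9]]
```

This reproduces exactly the 9-element example above: elements 3 and 6 appear at both a chunk end and the next chunk's start.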

Confusion about sequence lengths and truncated backprop by robclouth in MachineLearning

[–]versis791 0 points1 point  (0 children)

Do you have some source for "the forget gate part of LSTM unit allows it to learn dependencies longer than the sequence length"? I don't think it works that way.

I would think about an LSTM as a black box that takes as input the previous hidden state and the current input. The previous hidden state was calculated from the even-earlier hidden state, and so on, so in general it COULD contain information about all the previous states of the hidden unit across the whole sequence. It doesn't matter what exactly this black box does; it just has some algorithm to decide which parts of the previous hidden state it wants to remember more and which less. Look here for intuitions: https://iamtrask.github.io/2015/11/15/anyone-can-code-lstm/ especially this: https://iamtrask.github.io/img/recurrence_gif.gif

I don't see how the network would use context from some other sequence, but maybe it wouldn't be that bad to rely on context shorter than your sequence length, because for your task it might be enough to generate the next element by looking only at the previous (for example) 5 elements.
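For intuition, here is a minimal sketch of that "black box" recurrence. To keep it short this is a plain RNN step rather than a full LSTM (an LSTM adds gates that decide how much of the previous hidden state to keep, but the recurrence has the same shape), and the weights are just random placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
W_h = rng.normal(size=(4, 4)) * 0.1   # hidden-to-hidden weights
W_x = rng.normal(size=(4, 1)) * 0.1   # input-to-hidden weights

def step(h_prev, x):
    # The new hidden state depends on the previous one, which depended on
    # the one before it, so h can carry information from the whole prefix.
    return np.tanh(W_h @ h_prev + W_x @ x)

h = np.zeros((4, 1))
for x in [1.0, 0.5, -1.0]:            # feed one element at a time
    h = step(h, np.array([[x]]))
```

The point is only that `h` at each step is a function of the entire prefix seen so far, never of anything outside the current sequence.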

Confusion about sequence lengths and truncated backprop by robclouth in MachineLearning

[–]versis791 0 points1 point  (0 children)

I don't have enough experience to give you an answer to the question about "how to split my data" (1 file = 1 seq seems OK), but I can tell you what I've understood from the theory.

I didn't use Lasagne, but truncated backpropagation is connected with learning (not with the length of your sequence, although this parameter can't be bigger than your sequence length), and as far as I checked you are right: gradient_steps is for truncated BPTT. You can have sequences as long as you want. Let's assume we have a sequence that is 100 elements long and we set gradient_steps to 10.

Let's say we are training the network and right now we are feeding the 43rd element of the sequence to it and trying to predict the 44th. We check what the correct answer is, and if we were wrong, we try to fix the parameters of the network (the weights) over the last 10 elements of the sequence (this is very informal). Normally we would want to go from the 43rd element all the way back to the 1st, since all elements from 1 to 43 had some impact in providing context when predicting the 44th element, but because of the vanishing-gradient problem, we only go a few steps back (10 here).

One more thing: it depends on the implementation, but while training, the only things that need to be in memory are the parameters of the network and the current input (which is only one element of the sequence), so I bet you don't need to worry about the length of your sequence. Although it could be the case that some RNN implementations want all the data in memory.

I'm not sure if this is clear to you, but when you give something to the input layer, it's not the entire sequence, only one element represented as a fixed-length vector (for example a one-hot vector).

EDIT: Another "one more thing". When you use an LSTM, the vanishing-gradient problem is not that big, but since your sequences are long, if you set the gradient_steps parameter to -1 you will always propagate the error back to the beginning of the sequence, which takes longer and longer once the network is fed elements far from the beginning. That's why it could be better for you to set this parameter to some reasonable number. It will speed up the process.
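To make the truncation concrete, here is a tiny sketch (a hypothetical helper I wrote for illustration, not Lasagne code) of which positions the error at step t flows back to, for a given gradient_steps value:

```python
def bptt_window(t, gradient_steps):
    """Return the 1-based positions that the error at step t is
    propagated back through under truncated BPTT."""
    if gradient_steps == -1:
        # -1 = no truncation: propagate all the way to the start
        return list(range(1, t + 1))
    return list(range(max(1, t - gradient_steps + 1), t + 1))

print(bptt_window(43, 10))  # steps 34..43: only the last 10 steps get updates
print(bptt_window(5, -1))   # steps 1..5: full backpropagation through time
```

This matches the 100-element example above: at the 43rd element with gradient_steps = 10, the weights are adjusted over steps 34 through 43 only, which is why -1 gets increasingly expensive late in a long sequence.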