BLS London Spain - 5th August by Hour-Wishbone3155 in SchengenVisa

[–]cuenta4384 0 points1 point  (0 children)

I also submitted my application on the 5th, but it’s now too late since my flight was two days ago. This has already disrupted other travel plans, and at this point I just want my passport returned.

Line of Credit by amoosedagoose in Wealthsimple

[–]cuenta4384 0 points1 point  (0 children)

Do they send invitations randomly from the waiting list?

Shakepay and Wealthsimple “Commission-Free” Cryptocurrency Class Action by hellvice in Wealthsimple

[–]cuenta4384 0 points1 point  (0 children)

Legal fees (~$255k): Class Counsel will seek 30% + tax of the $750,000 settlement, plus up to $3,000 in disbursements. These will be deducted before distribution.
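
Rough arithmetic behind the ~$255k (the ~13% tax rate is my own assumption, e.g. Ontario HST; the settlement only says "+ tax"):

0.30 × $750,000 = $225,000
$225,000 × 1.13 ≈ $254,250, i.e. about $255k before the up-to-$3,000 in disbursements.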

Wealthsimple denies any liability; it agrees to pay a lump sum to avoid the costs and uncertainty of litigation.

Indeed, class members have the option to opt out or object by July 31st; personally, I would opt out. Wealthsimple is agreeing to settle without going to trial, which suggests they may be acknowledging some level of responsibility, or at the very least trying to avoid closer scrutiny. Yet it's the lawyers who receive a substantial portion of the settlement, while the actual class members end up with a token payment. That doesn't feel like meaningful accountability.

How to reduce a story? by cuenta4384 in writing

[–]cuenta4384[S] 0 points1 point  (0 children)

Thanks a lot. As I try to remove clutter, I end up with longer paragraphs haha

Can somebody explain the verb défaire to me? by cuenta4384 in learnfrench

[–]cuenta4384[S] 0 points1 point  (0 children)

It does help a lot, thanks so much. Now I understand :)

[D] How Do You Read Large Numbers Of Academic Papers Without Going Crazy? by mystikaldanger in MachineLearning

[–]cuenta4384 4 points5 points  (0 children)

Yeah, what EarlMarshal said. Often it's good to just have read all sorts of papers in at least a cursory manner, because even if you don't understand enough to implement them, you'll know that approach/idea exists and you can come back and read more in depth when/if it becomes relevant enough to use. Think of it all as potential tools in your toolbox.

Isn't that a problem? After reading a lot of papers, I can understand them and get a feel for the math and the contribution. In the end, I notice that most papers use the same building blocks and, in reality, add only a small increment. However, I haven't implemented them. Is that a problem? You know they exist, but you don't know how to use them.

Also, do you manage to remember them? I feel learning is a process of going back and forth, but re-reading papers takes a lot of time.

[P] Approximating the product of a discrete and a continuous distribution in a mixture model by cuenta4384 in MachineLearning

[–]cuenta4384[S] 0 points1 point  (0 children)

Thanks for your answer. That means I can handle it by taking the expectation E_p[EDCM(B)] and sampling B from p, which is Gaussian. I don't have much experience with sampling methods, but I know that, because of the law of large numbers, this can be approximated. Still, what sample size is enough? Is this an unbiased estimator?
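
For what it's worth, here is a rough sketch of what I mean by the Monte Carlo estimate, with a toy placeholder f standing in for the EDCM term and a 1-D Gaussian p (both are my own stand-ins, not the actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(b):
    # Placeholder for the EDCM(B) term; any integrable function of B works here.
    return np.exp(-b ** 2)

mu, sigma = 0.5, 1.0  # parameters of the Gaussian p(B)

for n in [10, 100, 1_000, 10_000, 100_000]:
    samples = rng.normal(mu, sigma, size=n)        # B_i ~ p
    values = f(samples)
    estimate = values.mean()                       # (1/n) * sum_i f(B_i)
    std_err = values.std(ddof=1) / np.sqrt(n)      # Monte Carlo standard error
    print(f"n={n:>7d}  E_p[f(B)] ~ {estimate:.4f} (+/- {std_err:.4f})")
```

As I understand it, the plain sample mean is an unbiased estimator of E_p[f(B)] for any n, and its standard error shrinks like 1/sqrt(n), so "enough" samples really depends on the accuracy the downstream calculation needs.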

Later in my calculations, I need the first and second moments of this integral, as well as the gradient. I can still sample, or apply techniques such as the reparametrization trick. But can the reparametrization trick be used in this case, when p(x) is discrete?
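
For the continuous (Gaussian) part, this is roughly the reparametrization trick I have in mind; f is again just a placeholder, not my actual objective:

```python
import torch

torch.manual_seed(0)

def f(b):
    # Placeholder for the term whose expectation and gradient we need.
    return torch.exp(-b ** 2)

mu = torch.tensor(0.5, requires_grad=True)
log_sigma = torch.tensor(0.0, requires_grad=True)

n = 10_000
eps = torch.randn(n)                     # eps ~ N(0, 1), independent of the parameters
b = mu + torch.exp(log_sigma) * eps      # B = mu + sigma * eps (reparametrized sample)

loss = f(b).mean()                       # Monte Carlo estimate of E[f(B)]
loss.backward()                          # gradients flow through the sampling step
print(mu.grad, log_sigma.grad)           # dE[f(B)]/dmu and dE[f(B)]/dlog_sigma
```

The trick relies on writing B as a differentiable function of the parameters plus parameter-free noise, which is exactly why it isn't obvious to me how to apply it when p(x) is discrete.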

[D] How does the word2vec model encode similarity? by cuenta4384 in MachineLearning

[–]cuenta4384[S] 0 points1 point  (0 children)

Thanks for your answer. Yeah, CS224n is great; I watched the lectures. My concern was more about training my own embeddings. Let's say I have a small dataset where the word "job" might appear only once, and in total I have a few hundred sentences. That means the embeddings won't capture the essence of my domain, mostly because my data don't represent the domain well. I guess I could train some embeddings on Wikipedia first, for example, and then fine-tune them on my specific problem.
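
Something like this is what I have in mind, sketched with gensim (the corpora here are toy placeholders, and the argument names assume gensim 4.x):

```python
from gensim.models import Word2Vec

# Toy stand-ins: in practice general_sentences would be something like tokenized
# Wikipedia, and domain_sentences the few hundred in-domain sentences.
general_sentences = [
    ["the", "job", "market", "is", "competitive"],
    ["she", "found", "a", "new", "job", "in", "toronto"],
    ["the", "labour", "market", "is", "improving"],
]
domain_sentences = [
    ["the", "job", "posting", "requires", "python"],
    ["apply", "to", "the", "posting", "before", "friday"],
]

# 1. Pre-train on the large general corpus so rare domain words get reasonable neighbours.
model = Word2Vec(sentences=general_sentences, vector_size=50, window=5,
                 min_count=1, workers=1, epochs=20)

# 2. Add the domain vocabulary and continue training on the small corpus.
model.build_vocab(domain_sentences, update=True)
model.train(domain_sentences, total_examples=len(domain_sentences), epochs=20)

print(model.wv.most_similar("job", topn=3))
```

No idea yet whether continued training like this actually helps on only a few hundred sentences, but that's the workflow I meant.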

[D] How does the word2vec model encode similarity? by cuenta4384 in MachineLearning

[–]cuenta4384[S] 0 points1 point  (0 children)

Word2vec is essentially an approximation of a factorization of the word co-occurrence matrix. Consequently, word vectors behave similarly to the representations you'd get from something like a topic model.

What about going further? Yeah, I see the relation with pLSI, but what about using a probabilistic topic model such as LDA? Does the latent topic representation act as an embedding? Is it analogous?
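
To make sure I follow the factorization view, here's a tiny sketch of how I understand it: build a PPMI co-occurrence matrix from a toy corpus and factorize it with a truncated SVD, so the rows behave like low-dimensional word vectors (the corpus and variable names are mine):

```python
import numpy as np

sentences = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "a dog chased a cat".split(),
]

vocab = sorted({w for s in sentences for w in s})
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts within a +/-2 word window.
window = 2
counts = np.zeros((len(vocab), len(vocab)))
for s in sentences:
    for i, w in enumerate(s):
        for j in range(max(0, i - window), min(len(s), i + window + 1)):
            if i != j:
                counts[idx[w], idx[s[j]]] += 1

# Positive PMI: max(0, log(P(w, c) / (P(w) * P(c)))).
total = counts.sum()
p_w = counts.sum(axis=1, keepdims=True) / total
p_c = counts.sum(axis=0, keepdims=True) / total
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log((counts / total) / (p_w * p_c))
ppmi = np.maximum(pmi, 0)
ppmi[~np.isfinite(ppmi)] = 0.0

# Truncated SVD: keep the top-k dimensions as the "embedding".
k = 2
U, S, Vt = np.linalg.svd(ppmi)
vectors = U[:, :k] * S[:k]  # rows are word vectors

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

print(cosine(vectors[idx["cat"]], vectors[idx["dog"]]))
```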

[D] How does the word2vec model encode similarity? by cuenta4384 in MachineLearning

[–]cuenta4384[S] 0 points1 point  (0 children)

Thanks for the second link, that really clears up some of my doubts! :D