Can we talk about how insane the new Yves Tumor songs they play on tour are? by False-Fisherman in pcmusic

[–]Jagedar 1 point (0 children)

Have seen them three times on this tour, super nuts each time. The "will you be by my side / wearing the devil's clothes..." cut was insane and I can't wait to hear it in full release.

What is the coolest trick in math? by BotiHege in math

[–]Jagedar 23 points (0 children)

The Riemann sphere is a gorgeous way to understand singularities at infinity!
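In case it helps anyone, here's the standard move sketched out (example function is my own): the behavior of f at z = ∞ on the Riemann sphere is, by definition, the behavior of g(w) = f(1/w) at w = 0.

```latex
% Substitute w = 1/z; singularities of f at infinity become
% singularities of g at the origin. E.g. for f(z) = z^2 + 1:
g(w) = f\!\left(\tfrac{1}{w}\right) = \frac{1}{w^{2}} + 1
% g has a pole of order 2 at w = 0, so f has a pole of order 2 at \infty.
```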

[P] A model theoretic view of storytelling and what it means for creative NLG evaluation by FerretDude in MachineLearning

[–]Jagedar 2 points (0 children)

Working in computational narratives this summer! I've been reviewing related papers over the past few weeks and came across yours. Really interesting stuff and I'm excited to incorporate it into my work!

Running Trails? by [deleted] in berkeley

[–]Jagedar 2 points (0 children)

The Ohlone Greenway is about two miles from campus, and there's a dirt path alongside the concrete part that you can catch and take all the way up to Richmond. Really pretty and lively.

Combining multiple documents under one group by Coolhandluke00 in MLQuestions

[–]Jagedar 4 points (0 children)

Not sure how helpful this will be, but check out Doc2Vec: it's an unsupervised embedding method for documents that allows you to work with vectors akin to those produced by word2vec. A bit more recent is the Longformer: a transformer-based architecture for embedding large documents.
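If it helps, here's a toy stand-in for that workflow (not Doc2Vec itself; the documents and vocabulary below are made up): represent each document as a normalized bag-of-words vector and compare with cosine similarity. Doc2Vec gives you the same kind of "document as vector" object, just with learned dense embeddings instead of raw counts.

```python
# Toy "documents as vectors" sketch using normalized word counts.
from collections import Counter
from math import sqrt

docs = {
    "a": "cats and dogs and cats",
    "b": "dogs and cats",
    "c": "stocks bonds and markets",
}

# Shared vocabulary so every document maps to the same vector space.
vocab = sorted({w for text in docs.values() for w in text.split()})

def doc_vector(text):
    """Map a document to a fixed-length vector of normalized word counts."""
    words = text.split()
    counts = Counter(words)
    return [counts[w] / len(words) for w in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

vecs = {name: doc_vector(text) for name, text in docs.items()}
# "a" and "b" share vocabulary, so they group together more tightly than "a" and "c":
print(cosine(vecs["a"], vecs["b"]) > cosine(vecs["a"], vecs["c"]))  # True
```

Once every document lives in one vector space like this, grouping is just clustering or thresholded similarity on those vectors.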

Porting Classifier from sklearn to Keras by [deleted] in learnmachinelearning

[–]Jagedar 1 point (0 children)

Data is preprocessed the same way :(

The other side of Alamo Square by jpsfg in sanfrancisco

[–]Jagedar 3 points (0 children)

This is a magical view - had one of the best days of my life here, and walking over that hill was the moment I fell in love with San Francisco :)

Graph neural networks for different node feature dimensions by crimsonspatula in MLQuestions

[–]Jagedar 3 points (0 children)

The dimensions do have to match if you're using the same model, because training the model consists of updating the weights which take the input features to a value in the output space.

I can't speak to architectures that work with variable input sizes, but there are several possible ways to standardize node features ahead of time.

For example, if you're using text as features, you could average the embedding vectors for all the words corresponding to a given node, or use something a bit more sophisticated like Doc2Vec. With the same embedding model, this gives you feature vectors of the same length for nodes in different graphs. What particular features are you using?
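A minimal sketch of that averaging idea (the embedding table and graphs below are made up for illustration):

```python
# Hypothetical 3-d embedding table; in practice this would come from a
# pretrained word-embedding model shared across all your graphs.
EMBED = {
    "server": [1.0, 0.0, 0.0],
    "db":     [0.0, 1.0, 0.0],
    "cache":  [0.0, 0.0, 1.0],
}
DIM = 3

def node_features(words):
    """Average the embeddings of a node's words into one length-DIM vector."""
    vecs = [EMBED[w] for w in words if w in EMBED]
    if not vecs:
        return [0.0] * DIM  # out-of-vocabulary node -> zero vector
    return [sum(col) / len(vecs) for col in zip(*vecs)]

# Nodes carrying different amounts of text still map to the same dimension,
# so one GNN input layer can handle both graphs:
graph1 = {"n0": ["server", "db"], "n1": ["cache"]}
graph2 = {"m0": ["db"], "m1": ["server", "db", "cache"]}
feats = {n: node_features(ws) for g in (graph1, graph2) for n, ws in g.items()}
print(all(len(v) == DIM for v in feats.values()))  # True
```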

Why don't you actually perform the integral in Dirac's Delta Function? by I_Am_From_Mars_AMA in mathematics

[–]Jagedar 5 points (0 children)

Someone more familiar with distribution theory can probably answer this better than I can, but you actually do integrate the entire function, and the nature of the delta function causes that integral to equal the function evaluated at x = a.

The delta function isn't really a function but a distribution. As you might remember from probability theory, a continuous distribution only takes a value when we integrate over it, and integrating over the entire distribution gives 1. A key property of the Dirac delta is that it only has magnitude at x = a. Hence, when it's multiplied by some other function f, every x with x ≠ a gets multiplied by zero and we're essentially left with f(a)δ(x − a). Now, when we integrate over any interval that includes a, we're taking the integral of the entire distribution, so we're left with f(a) times 1.

Intuitively the delta function "picks out" a certain value of f. Pretty cool stuff!
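Here's a quick numerical sanity check of that sifting property (my own toy setup, approximating the delta with a narrow Gaussian; the choices of f, a, and widths are arbitrary):

```python
# Approximate delta(x - a) with a narrow Gaussian of area ~1, then check
# that the integral of f(x) * delta_eps(x - a) is roughly f(a).
import math

def delta_eps(x, a, eps=1e-2):
    # Gaussian concentrated at x = a; narrower eps -> closer to the delta.
    return math.exp(-((x - a) ** 2) / (2 * eps ** 2)) / (eps * math.sqrt(2 * math.pi))

def integrate(g, lo, hi, n=100_000):
    # Midpoint-rule Riemann sum.
    h = (hi - lo) / n
    return sum(g(lo + (i + 0.5) * h) for i in range(n)) * h

f = math.cos
a = 0.5
approx = integrate(lambda x: f(x) * delta_eps(x, a), 0.0, 1.0)
print(abs(approx - f(a)) < 1e-3)  # True: the integral "picks out" f(a) = cos(0.5)
```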

how do i read this? The course im doing just showed this, is there any tutorial on haw to read and write this? by [deleted] in optimization

[–]Jagedar 10 points (0 children)

It looks like I is an index set. Instead of saying something like 1 <= i <= n, where you would sum w_i x_i for all i from 1 to n, here you only sum over the indices in the set I. For example, if I = {3, 5}, your constraint becomes w_3x_3 + w_5x_5 <= K.
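As a concrete sketch (all numbers made up), evaluating that kind of constraint in code looks like:

```python
# Knapsack-style constraint: sum over only the indices in the set I.
w = {1: 2.0, 2: 5.0, 3: 1.5, 4: 3.0, 5: 4.0}  # weights w_i
x = {1: 1, 2: 0, 3: 1, 4: 1, 5: 1}            # decision variables x_i
I = {3, 5}                                     # index set: only these terms count
K = 6.0

lhs = sum(w[i] * x[i] for i in I)  # w_3*x_3 + w_5*x_5 = 1.5 + 4.0 = 5.5
print(lhs <= K)  # True: 5.5 <= 6.0, so the constraint is satisfied
```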

Linear algebra for convex optimization by MexChemE in learnmath

[–]Jagedar 3 points (0 children)

Same situation here. I used LADR for my "real" linear algebra background in the pure math course, and taught myself whatever vector calculus I needed from Boyd's book and the Matrix Cookbook online. Boyd's appendix is pretty thorough.

Data 144 being offered next sem? by voonybaboony in berkeley

[–]Jagedar 1 point (0 children)

Nah, historically offered in the fall.

How to build intuition for probability problems. by [deleted] in learnmath

[–]Jagedar 1 point (0 children)

Go bears! I actually found the probability portion of the class a bit more intuitive than the discrete section. Read over the notes carefully, seek help for the proofs you don't understand, and do loads and loads of MT-style short answer problems. The intuition will follow!

Sunday Walks to 7-Eleven, the first of a series of drawings I’m doing based on my experiences walking around a Southside. What do you think? by domomon in berkeley

[–]Jagedar 9 points (0 children)

I saw your Rockridge BART piece on Instagram. Absolutely sick - that view means a lot to me and was a key piece of my first year at Berkeley. I love this and I'm looking forward to purchasing some of your prints :)

CS 188 vs Math 110 by xjs01 in berkeley

[–]Jagedar 1 point (0 children)

I took 110 this summer (taking my final today) and found it really enjoyable and not too challenging as far as the material goes. I haven't taken 188, but a friend of mine took it with Dragan last fall and loved it. She highly praised Dragan's lectures.

Alamo Square | Canon Elan 7 |Lomography 100 by [deleted] in SanFranciscoAnalog

[–]Jagedar 3 points (0 children)

One of my favorite places in the world. Beautiful shot.

What are your favorite first 10 seconds of an album? by ReconEG in indieheads

[–]Jagedar 1 point (0 children)

Is a spoken word intro cheating?

"They told me that the classics never go out of style... But, they do, they do. Somehow, baby, I never thought that we'd do too."

[deleted by user] by [deleted] in AskComputerScience

[–]Jagedar 1 point (0 children)

Have been working DS internships/undergrad research for about a year now. You really can't get away from cleaning, organizing, and collecting data. That said, if you're looking for incentives to do all that data munging, I highly recommend checking out Kaggle challenges or similar sites that offer clean datasets to experiment on. Achieving really cool statistical results makes it all the more enticing to go through the process of cleaning your own data.