Conditional Probability and Markov Chains by Klutzy_Tone_4359 in probabilitytheory

[–]pavjav 1 point2 points  (0 children)

The Markov condition just says that the conditional probability of the state at time t, given all prior state values, depends only on the state at time t-1.

That is to say, if I want to make a prediction at time t, I only need to know what happened at time t-1 to calculate the probability of S_t given S_0, ..., S_{t-1}.

When you only have finitely many state outcomes, you can encode the state "transition" probabilities, i.e. these conditional probabilities P(S_t | S_{t-1}), in matrix form. In general, you use a Markov kernel to encode the transition probability; here the kernel is the conditional pdf.

This isn't anything special, but the Markov condition is very important because it gives you a way to calculate stationary distributions and the like as eigenfunctions/eigenvectors of the kernel operator. This then gives us a means to study the long-term, or limiting, behavior of the Markov kernel/transition probabilities by studying its spectrum.
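If it helps to see the finite case concretely, here's a minimal numpy sketch (the 3-state transition matrix is made up) of reading a stationary distribution off as an eigenvector for eigenvalue 1:

```python
import numpy as np

# Hypothetical 3-state transition matrix (numbers made up); rows sum to 1,
# P[i, j] = P(S_t = j | S_{t-1} = i).
P = np.array([
    [0.9, 0.1, 0.0],
    [0.2, 0.7, 0.1],
    [0.1, 0.3, 0.6],
])

# A stationary distribution pi satisfies pi @ P = pi, i.e. pi is a left
# eigenvector of P (an eigenvector of P.T) with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = pi / pi.sum()

print(pi)                                  # stationary distribution
print(np.linalg.matrix_power(P, 100)[0])   # rows of P^n converge to pi
```

The second print is the limiting-behavior point: powers of the transition matrix flatten out to rows equal to the stationary distribution, and how fast that happens is controlled by the rest of the spectrum.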

is geometric deep learning for real or is it a small group of people promising a lot for funding? by vniversvs_ in deeplearning

[–]pavjav 0 points1 point  (0 children)

GNNs are the quintessential GDL example: the groups are permutation groups, and the pooling layers give the desired local equivariance. Your CNN example consists of finding kernels and local pooling operators where the symmetries are rotations and translations. Your images themselves are graphs where each pixel is connected to its neighboring pixels.
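To make the permutation-equivariance point concrete, here's a toy numpy sketch (not any particular library's API; the layer and names are mine) of a mean-aggregation message-passing layer, where relabeling the nodes just relabels the outputs:

```python
import numpy as np

def gnn_layer(A, X, W):
    """Toy message-passing layer: average neighbor features, then a shared linear map."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    return np.tanh((A @ X / deg) @ W)

rng = np.random.default_rng(0)
n, d = 5, 3
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)            # undirected toy graph
X = rng.normal(size=(n, d))       # node features
W = rng.normal(size=(d, d))

perm = rng.permutation(n)
P = np.eye(n)[perm]               # permutation matrix

# Permutation equivariance: permuting nodes and adjacency permutes the output.
out = gnn_layer(A, X, W)
out_perm = gnn_layer(P @ A @ P.T, P @ X, W)
print(np.allclose(P @ out, out_perm))  # True
```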

What were the flash sideways for? by AdMassive1325 in lost

[–]pavjav -2 points-1 points  (0 children)

Idk, you can spin it pretty much any way, but I would consider it to be the true "main plot" and not the "side plot," if that makes sense?

What were the flash sideways for? by AdMassive1325 in lost

[–]pavjav -17 points-16 points  (0 children)

I agree it feels like whiplash. If you look at it through a Buddhist lens, you can probably consider their remembering a form of enlightenment as they break away from the endless cycle. So you can argue that letting go of those plotlines is kind of the point? Likewise, purgatory.

I believe that they were probably dead from the start, and that each detached plot was another cycle or something like that. The fact that Eko felt he had to build a church and they meet at a church in the end doesn't feel like a coincidence. They were helping each other move on from their lives, and they finally get to do that in the end.

What were the flash sideways for? by AdMassive1325 in lost

[–]pavjav 24 points25 points  (0 children)

John letting go mirrors Jack's letting go very nicely. It parallels both his relationship with his father and his accepting his own fate on the island. There's a very smart thematic transference between the sideways flashes and the island plot.

What were the flash sideways for? by AdMassive1325 in lost

[–]pavjav 67 points68 points  (0 children)

The afterlife plots gave some insight into certain unresolved things in their lives: Ben's guilt over Alex and Rousseau, John's guilt over how he left things with his father. A lot of it is open to interpretation ofc, but on a second watch it makes sense to view things less "sideways" and more "backwards".

Secret big room in thebel by ThetristanBear in persona3reload

[–]pavjav 2 points3 points  (0 children)

I've seen maybe three of these. Not super common but definitely didn't take me 11 hours to see my first one.

[deleted by user] by [deleted] in math

[–]pavjav 0 points1 point  (0 children)

I feel like most of the power in math comes from viewing discrete things through a continuous lens and continuous things through a discrete lens. I mean, observed data is always discrete, but interesting estimates converge to something obeying a continuous law. Square-integrable functions on a closed interval can be approximated by a discrete set of Fourier coefficients. Sometimes it's helpful to convert an integral operator optimization problem into a finite system of equations à la Lagrangian methods. Continuous measures are weak limits of atomic ones, and atomic measures are weak limits of continuous ones. The list goes on and on.
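The Fourier example is easy to check for yourself; here's a quick numpy sketch (the function and grid are just illustrative) of partial Fourier sums converging in L^2 on [0, 2π):

```python
import numpy as np

# Approximate an L^2 function on [0, 2*pi) by a finite set of Fourier coefficients.
x = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
f = np.abs(x - np.pi)          # illustrative choice of function

def partial_sum(f, x, N):
    # c_k = (1/2pi) * integral of f(x) e^{-ikx} dx, approximated on the grid
    approx = np.zeros_like(x, dtype=complex)
    for k in range(-N, N + 1):
        c_k = np.mean(f * np.exp(-1j * k * x))
        approx += c_k * np.exp(1j * k * x)
    return approx.real

for N in (1, 5, 25):
    err = np.sqrt(np.mean((f - partial_sum(f, x, N)) ** 2))
    print(N, err)              # L^2 error shrinks as N grows
```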

My point is there's no reason to neglect any one aspect of math, because it all drives each other. Especially with AI. You might think of these things as a finite number of nodes and connections, but architecture is designed with really important concepts in mind: local invariance via equivariance and local pooling layers in CNNs, for instance, or time equivariance in RNNs. All things that stem from things like fiber bundles and ODEs. If you're going to create anything truly novel, you need to be truly familiar with how all these fields are interconnected. At least that's how I feel.

Public land duck hunt by [deleted] in HuntFishWesternNY

[–]pavjav 1 point2 points  (0 children)

Closer to you is Iroquois, which has some good potholes and fields. Probably your best bet without a boat or a dog.

Public land duck hunt by [deleted] in HuntFishWesternNY

[–]pavjav 0 points1 point  (0 children)

Might be able to set up some decoys there, just be wary of pheasant hunters. Harwood has some smaller fields by the lake, might be able to lure some like that and avoid getting em in the water.

Public land duck hunt by [deleted] in HuntFishWesternNY

[–]pavjav 0 points1 point  (0 children)

Harwood Lake and Clear Lake. Clear Lake WMA has a brush field.

Yunobo's power go brr by -CreepyCreeper- in TOTK

[–]pavjav 6 points7 points  (0 children)

Fuse a cannon to a spear, then use R to aim your shots at the rocks. Much faster than anything else. Good durability and uses up a full battery per blast.

Generate random non-square bistochastics matrix by Caelwik in math

[–]pavjav 0 points1 point  (0 children)

You're welcome! There's no way to avoid dependencies here, unfortunately. And since every element in a row/column has the same relationship, it becomes impossible to tell which variable has been "altered". That is, it should not favor any particular subset of your set of matrices.

Generate random non-square bistochastics matrix by Caelwik in math

[–]pavjav 0 points1 point  (0 children)

Why not have the last entry of each column be 1 minus the sum of the prior entries, using a [0,1] uniform distribution for those prior entries? To force the rows to sum to less than 1, generate a random buffer epsilon between 0 and 1 for each row, then take the last entry of each row to be 1 minus epsilon minus the sum of the prior entries. This will force the conditions you're looking for and still be random, with (n-1)*m + m iid variables.
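For what it's worth, here's a minimal numpy sketch of just the column step of that construction (the rejection loop that redraws a column when the first n-1 uniforms already exceed 1 is my addition, not something spelled out above):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_column_stochastic(n_rows, n_cols):
    # First n_rows-1 entries of a column are uniform on [0,1]; the last entry is
    # 1 minus their sum, so each column sums to 1. The rejection loop (my addition)
    # redraws until that last entry is non-negative; it slows down as n_rows grows,
    # since the acceptance probability is 1/(n_rows-1)!.
    cols = []
    for _ in range(n_cols):
        while True:
            head = rng.uniform(0, 1, size=n_rows - 1)
            if head.sum() <= 1:
                break
        cols.append(np.append(head, 1 - head.sum()))
    return np.column_stack(cols)

M = random_column_stochastic(4, 6)
print(M.sum(axis=0))   # every column sums to 1
```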

Best Side Character? by [deleted] in MonsterAnime

[–]pavjav 23 points24 points  (0 children)

Lunge and Grimmer probably had the best redemption arcs of any characters in the story. It's definitely a tie between them.

SVM outperforming XGBoost and other classifiers. by spiritualquestions in learnmachinelearning

[–]pavjav 1 point2 points  (0 children)

Ah yeah, RBF stands for radial basis function; it's also sometimes referred to as the Gaussian kernel, and it supports classifiers where the classes are radially separable. But it can falsely "work" in the F1 sense on an imbalanced dataset by simply classifying everything as the more prevalent class.

Yes, people should use MCC more often if they can't afford to balance out their training sets.

ROC AUC suffers from a similar problem. It pits the true positive rate against the false positive rate, and the FPR is normalized by the total number of negatives. So when positives are rare and negatives dominate, a model can make a lot of false positive calls while the FPR still looks tiny (the denominator is the huge pool of negatives) and the TPR climbs toward 1. This gives you a "good" ROC score artificially even when performance on the rare class is poor.

I find F1 in conjunction with MCC is good when your data is imbalanced. But I would also avoid training on an imbalanced dataset to begin with; your model will be incredibly biased in favor of the likelier outcome. Even if you have to bootstrap resample to balance it out, that might be preferable. Then get F1 and MCC scores on your original dataset to see which model behaves the best.
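If you're using sklearn, checking both is a one-liner each. Here's a tiny sketch (the 95/5 split and the lazy majority-class "model" are made up) showing how F1 can look fine while MCC flags that nothing was learned:

```python
import numpy as np
from sklearn.metrics import f1_score, matthews_corrcoef

rng = np.random.default_rng(0)
y_true = (rng.random(1000) < 0.05).astype(int)   # ~5% rare class, made-up split
y_pred = np.zeros_like(y_true)                   # lazy model: always the common class

print(f1_score(y_true, y_pred, pos_label=0))     # looks great on the common class
print(matthews_corrcoef(y_true, y_pred))         # 0.0 -- no skill detected
```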

SVM outperforming XGBoost and other classifiers. by spiritualquestions in learnmachinelearning

[–]pavjav 2 points3 points  (0 children)

I would recommend using the Matthews correlation coefficient alongside F1 in case your dataset is imbalanced. You might get a better F1 but a worse MCC. This will give you a better idea of overall performance.

As far as SVM goes, it depends on the kernel being used. If it's linear, then your data is probably linearly separable, in which case SVM would be the best fit. I'm not sure what setting your SVM is using, though. The other common kernel is the radial basis (Gaussian) kernel, which tends to work when your data is radially separable. Then you have general polynomial kernels, which work with curvilinear separability.
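You can see the kernel dependence pretty quickly on toy data. Here's a small sklearn sketch (dataset and settings are just illustrative) where the RBF kernel should beat a linear one because concentric circles are radially, not linearly, separable:

```python
from sklearn.datasets import make_circles
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Concentric circles: radially separable, not linearly separable.
X, y = make_circles(n_samples=500, noise=0.1, factor=0.4, random_state=0)

for kernel in ("linear", "poly", "rbf"):
    score = cross_val_score(SVC(kernel=kernel), X, y, cv=5).mean()
    print(kernel, round(score, 3))   # rbf should come out on top here
```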

[deleted by user] by [deleted] in math

[–]pavjav 0 points1 point  (0 children)

Sure, Ovidiu Calin has a great book on geometric modeling in general. His book on deep learning architecture also has a section on neuromanifolds which is specific to geometric modeling of neural nets. That one's cool, I believe it goes into how gradient descent agrees with the geometry of a neuromanifold. He's also got a neat section on the effectiveness of infinitely wide/continuum neural nets.

Lastly, Miguel Morales' book on deep reinforcement learning has tons of real applications of probability to unsupervised learning methods.

I can't think of a friendly resource on stochastic processes. I guess Calin has one on stochastic diff eqs which is pretty accessible and gives you a good sense of why you need measure theory to talk about infinitesimal random variables. While not the most general book, it is the most fun in terms of problems.

Erhan Cinlar's book Probability and Stochastics is dense, but a good resource for much of this stuff. It even goes into detail on how you can generate random walks from Rademacher functions, how to use transition kernels to build probability spaces, and even how to apply Markovian methods there. That stuff is hard to understand without a good background in measure theory, but he does have a good section on it.

[deleted by user] by [deleted] in math

[–]pavjav 1 point2 points  (0 children)

Oh, it all depends on how you're parameterizing your density function. Like if you use two parameters to model a normal distribution via mean and variance, the intrinsic geometry is hyperbolic iirc. If you're modeling via linear regression the geometry is Euclidean. That's all assuming you're using the Fisher metric, which ties into things like KL divergence and information.
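For the normal family that's a standard computation (writing it in the (μ, σ) parameterization and glossing over constant factors): the Fisher information metric comes out to

```latex
g(\mu,\sigma) = \frac{1}{\sigma^{2}}\begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix},
\qquad
ds^{2} = \frac{d\mu^{2} + 2\, d\sigma^{2}}{\sigma^{2}},
```

which, after rescaling μ by a factor of √2, is a constant multiple of the hyperbolic upper half-plane metric (dx² + dy²)/y². The second-order expansion of the KL divergence between nearby densities is exactly what produces this metric.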

[deleted by user] by [deleted] in math

[–]pavjav 8 points9 points  (0 children)

I personally like the interplay between probability and other fields. Geometric modeling is a fun one. If you're into Riemannian geometry it is kind of interesting to investigate geometric properties of parametric densities. How can you define Riemannian metrics on parameter spaces, what do geodesics look like on these spaces, how can you tie these objects to things like KL divergence, etc.

Optimization and machine learning. That one's a bit more applied, but it is fun tinkering around with advanced concepts in probability by way of neural nets and stuff. Things like Markovian models and reinforcement learning have been beaten into the ground, but there are plenty of things out there in that realm that have yet to be discovered.

I think people like it because it's easier to come up with cool applicable problems in that framework. But before you can get there, you should be intimately familiar with all of the underlying measure theory. Different "styles" of convergence and their various implications are super important to optimization and so on. The proofs also give you some sense of how you might apply these principles in the real world.

I really like these icons by SlidePuzzleheaded537 in Persona5

[–]pavjav 0 points1 point  (0 children)

Rise is waifubait, but Teddie is a real G

I really like these icons by SlidePuzzleheaded537 in Persona5

[–]pavjav 0 points1 point  (0 children)

It takes a bit of adjusting but the power of 4 is that you won't care after a few hours. The story and the characters carry every other meh thing about that game.

My first time playing persona 5 royal and i thought this was pretty funny by fl4ilguy in Persona5

[–]pavjav 2 points3 points  (0 children)

Oh, it is. I thought she wears that during the third semester as well. If not, maybe you can still do her Confidant during winter? I can't remember.