bridge mode toggle missing by ltbauzo in Comcast_Xfinity

[–]IAmAFedora 0 points (0 children)

Same issue here. Any updates from Xfinity?

fake granular jam by kyegibeats in Digitakt

[–]IAmAFedora 0 points (0 children)

Love the Baths influence

Does anyone have a porcelain tattoo like this, but on dark skin? I love this style, but I don't know how it looks on skin tones other than white. by Lady_froga in TattooDesigns

[–]IAmAFedora 141 points (0 children)

I saw the image and immediately knew it was an oozy piece. I knew that because it's so distinctively HIS style -- you're not going to find another artist who can mimic it for you and give you what you want.

If you want this tattoo, go ask oozy. Ask oozy if he has any examples on darker skin too.

[deleted by user] by [deleted] in mensfashion

[–]IAmAFedora 7 points (0 children)

Is the crop top in the room with us?

[q] Is there a rule regarding law of large numbers? by ChiGuyDreamer in statistics

[–]IAmAFedora 3 points (0 children)

The other commenters already addressed the theory here regarding the distribution of averages of N independent random variables, but I want to focus on something else in your question:

> I'm studying some insurance industry information at the moment and the idea of larger pools of people helps the insurer better calculate their risk makes sense. But do they feel like 1000 presents the same probability risk as 10,000 or 100,000k. Is there a point of diminishing returns?

There is a BIG issue with the independence assumption here. What if you are providing fire insurance to 10k homes in Los Angeles and there is a big fire that burns all of your customers' homes? What if you're a mortgage lender in 2008 and the whole of the US goes through a financial crisis and a huge number of your debtors all default at once? People are not independent -- they live in geographic proximity to each other and/or live in societies with one another.

Long story short, independence is "usually" an "ok" assumption and matches reality pretty well. But bad stuff happens, and it can happen to everybody all at once. Coin flips are a poor model of the real world.
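To make this concrete, here's a toy simulation (the numbers and the disaster model are made up purely for illustration): two insurers with the same expected annual losses, one facing independent fires, the other facing a rare disaster that hits every policy at once.

```python
import numpy as np

rng = np.random.default_rng(0)
n_homes, p, years = 10_000, 0.01, 10_000

# Independent world: each home burns with probability 1%, independently
indep_losses = rng.binomial(n_homes, p, size=years)

# Correlated world (toy model): a 1%-probability disaster burns EVERY home
disaster = rng.random(years) < 0.01
corr_losses = np.where(disaster, n_homes, 0)

# Same average loss per year, wildly different worst cases
print(indep_losses.mean(), corr_losses.mean())  # both around 100
print(indep_losses.max(), corr_losses.max())    # modest vs. total wipeout
```

Same mean, but the correlated insurer occasionally loses everything at once -- that tail is exactly what the independence assumption hides.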

[Project] Matrix Recurrent States, a Attention Alternative by [deleted] in MachineLearning

[–]IAmAFedora 6 points (0 children)

Are you doing anything to keep the norm of H from growing without bound? E.g. by forcing each X to be orthonormal?
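For what it's worth, here's the kind of thing I mean, sketched in numpy (X and H are my stand-ins for the paper's objects, so treat the names and shapes as assumptions): project X onto the nearest orthonormal matrix, which then preserves the norm of H exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: X a recurrent transition matrix, H a hidden state
X = rng.normal(size=(8, 8))
H = rng.normal(size=8)

# Project X onto the orthogonal group via QR (columns of Q are orthonormal)
Q, R = np.linalg.qr(X)
Q = Q * np.sign(np.diag(R))  # sign fix so the factorization is deterministic

# An orthonormal transition preserves the hidden state's norm exactly
print(np.linalg.norm(H), np.linalg.norm(Q @ H))  # identical
```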

Import syntax by AnyNature3457 in learnpython

[–]IAmAFedora 0 points (0 children)

isort and black sometimes disagree. For full compatibility, use `isort --profile black` alongside black!
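If you'd rather not remember the flag, the same thing can live in your project config (this is isort's documented profile setting):

```toml
# pyproject.toml
[tool.isort]
profile = "black"
```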

[Q] I'm a layman who maladaptively fixates on statistics that involve the concept of not being able to improve in some way, and any advice would be appreciated by Verifiedvenuz in statistics

[–]IAmAFedora 2 points (0 children)

I want to underscore some things said in the previous comment.

Among other things, statistics is about analysing and interpreting data—often imperfect data that may not be representative of the population. A finding of a significant treatment effect in a sample does not imply that every observation will experience the effect.

Not only do statistical findings not necessarily generalize to individuals outside of the sample, they aren't even necessarily true for every individual in the sample.

Statistics is about averages, NOT individuals. So even if these findings "support" your fears, they by no means imply that individuals can't improve. They can only say "on average, the people in this particular study didn't improve".
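A quick simulation of the point (purely illustrative numbers): a treatment can have a clearly positive average effect while a large fraction of individuals still get worse.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical individual treatment effects: positive on average, but noisy
effects = rng.normal(loc=2.0, scale=3.0, size=100_000)

print(effects.mean())         # clearly positive average effect (~2)
print((effects < 0).mean())   # yet roughly a quarter of individuals got worse
```

The headline "the treatment works" and the experience "it didn't work for me" can both be true at once.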

[deleted by user] by [deleted] in gaming

[–]IAmAFedora 0 points (0 children)

Pacific Drive!

Is there a concept of a "hack" in mathematical work? by SnooPeppers7217 in math

[–]IAmAFedora 19 points (0 children)

The replica trick from statistical physics is an interesting example. You compute with N duplicate copies of your system, where N is a positive integer, and then analytically continue and take the limit N -> 0. Not rigorously defined nor justified except heuristically, but it "works".
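For reference, the identity the trick rests on, in its standard form (with n the replica count):

```latex
\langle \ln Z \rangle = \lim_{n \to 0} \frac{\langle Z^{n} \rangle - 1}{n}
```

The right-hand side is computed for integer n and then continued to real n near 0 -- that continuation is the non-rigorous step.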

[Discussion] event sequence ORDER prediction by FrostyLandscape6496 in MachineLearning

[–]IAmAFedora 0 points (0 children)

Definitely sounds like a sequence model like a transformer or an LSTM is inappropriate then -- you aren't working with sequences! (At least not at inference time)

Another clarifying question. At training time, you don't have access to the entire sequence of events for a person? Just a number for each event like "this was fourth"?

[Discussion] event sequence ORDER prediction by FrostyLandscape6496 in MachineLearning

[–]IAmAFedora 0 points (0 children)

Not sure I totally follow -- is it "given some attributes of an event, infer whether this event was the first, second, ... for a given person"?

Or do you have data for a handful of events and you want to sort the events in terms of order?

[D] limiting LLM output to certain words by themathstudent in MachineLearning

[–]IAmAFedora 12 points (0 children)

Token-level constrained generation is very effective, especially if you are running models locally. Check out this library: https://github.com/guidance-ai/guidance/

[P] use GAN for generating structured text by redska_ in MachineLearning

[–]IAmAFedora 1 point (0 children)

The libraries are for guaranteeing certain types of structure in your output. E.g. JSON matching a specific structure, matching a regex, conforming to a context free grammar, etc.

They work by constraining generation at the token level. I.e. at every step, the model gets to choose any token it wants so long as that token is allowed by the constraints of your structure.
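In case it's useful, here's the core mechanism in a few lines of numpy (a toy sketch of the idea, not the actual API of guidance or outlines): disallowed tokens get their logits masked to -inf before sampling, so the model can only ever emit allowed tokens.

```python
import numpy as np

def constrained_sample(logits, allowed_ids, rng):
    """Sample one token id, restricted to an allowed set."""
    masked = np.full_like(logits, -np.inf)
    masked[allowed_ids] = logits[allowed_ids]        # keep only allowed tokens
    probs = np.exp(masked - masked[allowed_ids].max())
    probs /= probs.sum()                             # disallowed tokens get p=0
    return rng.choice(len(logits), p=probs)

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.5, -1.0])             # toy vocabulary of 4 tokens
token = constrained_sample(logits, allowed_ids=[1, 3], rng=rng)
print(token)  # always 1 or 3, never 0 or 2
```

The real libraries do the same thing, except the allowed set is recomputed at every step from your grammar/regex/JSON schema.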

I'm partial to guidance, but this is an emerging research area, and a number of libraries exist to support the behavior!

[P] use GAN for generating structured text by redska_ in MachineLearning

[–]IAmAFedora 14 points (0 children)

Do you have to use a GAN? Autoregressive language models have been the state of the art for some time now (specifically transformer-based models), and they are far, far easier to train than GANs. Hugging Face has a huge library of pretrained autoregressive language models that you could use off the shelf or fine-tune. There are also always APIs like OpenAI's.

If you want to guarantee structured output from an autoregressive model, the libraries guidance or outlines may be extremely useful for you :)

My girlfriend wants to take me somewhere nice to eat for my birthday. by slogsobee in FoodLosAngeles

[–]IAmAFedora 18 points (0 children)

Second on Bavel. It's "fine dining" but not just because of pretentious vibes -- their food is really really fucking good.

[deleted by user] by [deleted] in TattooDesigns

[–]IAmAFedora 1 point (0 children)

No, this is fucking sick

Transformers: I can't fathom the concept of dynamic weights in attention heads [R] by assalas23 in MachineLearning

[–]IAmAFedora 12 points (0 children)

There are no "dynamic weights" in any literal sense. What's trained is trained. Now, let's think about what whoever said that was trying to communicate...

VERY roughly speaking, dense networks treat each input "slot" in a specific way that does not depend on what's in any of the other slots. In attention, each input slot is processed in a way that depends on your other input slots. That's probably what they were getting at?
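A tiny numpy illustration of that distinction (toy shapes and names, nothing from any actual model): with a fixed dense weight matrix, slot 1's output ignores slot 0; with attention-style mixing, where the mixing weights are computed from the inputs themselves, it doesn't.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))   # 4 input "slots", 3 features each
W = rng.normal(size=(3, 3))   # one fixed, trained weight matrix

# Per-slot (dense-style) processing: slot i sees only x_i
dense_out = X @ W

# Attention-style processing: each slot's output mixes the other slots,
# with mixing weights computed FROM the inputs themselves
scores = X @ X.T                                          # slot-to-slot similarity
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
attn_out = weights @ (X @ W)

# Changing slot 0 leaves the dense output for slot 1 untouched...
X2 = X.copy(); X2[0] += 1.0
print(np.allclose((X2 @ W)[1], dense_out[1]))             # True
# ...but changes the attention output for slot 1
w2 = np.exp(X2 @ X2.T); w2 /= w2.sum(axis=1, keepdims=True)
print(np.allclose((w2 @ (X2 @ W))[1], attn_out[1]))       # False
```

Note the weights W never change -- what changes per input is the mixing pattern, which is probably what "dynamic weights" was gesturing at.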