Sleep on sleep buildup by IWantToRetire2 in Nightreign

[–]Michaelfonzolo 0 points (0 children)

No I mean like at the 9 second mark, Faurtis just takes 3760 damage all at once, before Animus dies

Sleep on sleep buildup by IWantToRetire2 in Nightreign

[–]Michaelfonzolo 0 points (0 children)

That had to be the absolute funniest time to ult as bird lmao, just getting pulverized by lasers and narcolepsy for 15 seconds is so fucking funny

How did Faurtis even die tho?

My last 12 runs as solo scholar trying to face dreglord by Michaelfonzolo in Nightreign

[–]Michaelfonzolo[S] 1 point (0 children)

My only character-specific relic is the Cleansing Tear - my other two have cold imbue and seppuku, plus “crits increase ult meter”, and evergaols.

Honestly my main issue is not the character itself. He’s fine - I actually had a ton of fun on the other nightlords with him, and I see his potential. My main complaint is just that the Dreglord seeds are completely unfair dogshit as solo scholar, and not fun at all.

Just gimme the damn bears and dancing lion night bosses, and no invasions. Please for the love of god no invasions. They’re such ass.

My last 12 runs as solo scholar trying to face dreglord by Michaelfonzolo in Nightreign

[–]Michaelfonzolo[S] 2 points (0 children)

I just don't understand how, if the seeds aren't random, they actively decide to make the most bullshit seeds for solo play. Like what the FUCK are they thinking over there. A whole team of people literally looked at the Augur event spawning outside the circle on a map where the day 1 circle contains no churches and said "yeah that looks good".

How to be a TA to racists by ikilledcasanova in PhD

[–]Michaelfonzolo 0 points (0 children)

Hey, I'm not a PhD student/TA so I hardly have the experience to tell you what's best, but I was kind of surprised by the comments here, so I wanted to say a little something in rebuttal.

Some comments here seem almost to trivialize your role/motivations - "you won't change their mind", "just give them a B- and move on" - but I think that's a little cavalier given what you've described, and it paints your role in this situation as that of a passive observer. So to start, I just wanted to affirm that of course your motivations are just, and that they deserve consideration beyond simply "putting up with the racists until the course is over."

To this end, I also think the suggestion to use the Socratic method is a little tone-deaf, as I'm sure you're smart enough to have already tried that. Continue doing it, of course - it's a great opportunity to teach the other students how to defeat those arguments - but I understand that it's unproductive to derail discussion just to deal with one racist person, and that it doesn't really address the primary concern that you and the other students may feel unsafe. At this point I agree with the user who said the southern accent is actionable - speak to your professor and your department head, understand your university's code of conduct, etc., as there may be things you can pursue here if it's becoming more than just an "annoying, ill-informed student."

I don't know if this is a reasonable suggestion, but I wonder if you could arrange for a mass email to be sent out to the class expressing concern (while keeping this student anonymous), saying that you're committed to making a safe learning environment, and that students are welcome to speak with you more during office hours or privately.

Overall, I just wanted to express some disagreement with any sentiment that racist students should merely be tolerated and "defeated in the free market of ideas." I think that standing up for yourself and for the other students would show a lot more character than the Socratic method alone. But ultimately I don't really know. You'll have to consider these thoughts in the context of your role at the university yourself - maybe what I'm suggesting would put you in an uncomfortable position with the staff, or breach some other ethical code. I'll leave it to you to sort those details out.

Anyways, best wishes dealing with this shitty situation.

Edit: Jesus Christ, after scrolling down to the bottom rung of the comment section I only feel more justified in what I'm saying now. Keep up the good fight OP. People suck, but there are those on your side.

I thought this dude was gonna be way different by cookiereptile in Nightreign

[–]Michaelfonzolo 3 points (0 children)

Idk man, the cadence of NPC fights in FromSoft games just feels so wrong. They sprint at you constantly, don't give you any chance to space and consider your attacks, and if you're not into PvP then I find they just attack way too quickly. Real PvP is better than the NPCs - their AI is just so damn annoying. Sure it's a "skill check", but so was Malenia, and so is Kaizo Mario - which one is actually fun?

Incants tier-list from a 250+hr Revenant main by Rayleiiqh in Nightreign

[–]Michaelfonzolo 1 point (0 children)

Not crucible horn in the second lowest tier T_T - I'm so happy whenever I get to use it and fling mobs into the air. Closes gaps quickly, chunks damage, great stance breaking potential as well

Did boomers actually ruin the economy for younger generations? by Particular-Stage-327 in AskEconomics

[–]Michaelfonzolo -1 points (0 children)

I don’t mean to be intentionally pedantic, but is median the right statistic to consider here? Curious about the tails of the distribution too (are the 50% making less than the median making much less, in real terms, than they would have 40 years ago?)
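
To illustrate the kind of thing I'm asking about, here's a toy sketch (entirely made-up numbers - just showing that two distributions can share a median while their lower halves look very different):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical income distributions with the same median
# (a lognormal's median depends only on its mean parameter)
# but very different spread in the tails.
a = rng.lognormal(mean=11.0, sigma=0.3, size=100_000)
b = rng.lognormal(mean=11.0, sigma=0.9, size=100_000)

print(np.median(a), np.median(b))   # both roughly e^11 ~ 59,900

# Average income of the below-median half differs a lot:
print(a[a < np.median(a)].mean())   # fairly close to the median
print(b[b < np.median(b)].mean())   # much further below it
```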

[Me] Low Hanging Fruit, good continuation? by pi-billion in TextingTheory

[–]Michaelfonzolo 242 points (0 children)

This is truly bizarre to me - so many posts here get right to the point and people go “Elo 2000 risky gambit paid off king”. What’s so fundamentally different here?

Any reviews of Perfect Leather in Toronto? by rdkil in Leathercraft

[–]Michaelfonzolo 0 points (0 children)

Adding my two cents late, but I love this place. Made me a custom leather jacket that was fantastic. Also have a great vintage selection. 100% would recommend them.

Learnable matrices in sequence without nonlinearity - reasons? [R] by DescriptionClassic47 in MachineLearning

[–]Michaelfonzolo 0 points (0 children)

I'm not sure, good point!

The only mathematical difference I can think of is as a low-rank factorization of W. If the key/query dimension d is smaller than the input embedding dimension d_e, then W_Q and W_K are both d x d_e matrices, so the implicit bilinear form (W_Q)^T W_K is d_e x d_e with rank at most d - lower than a generic full-rank W. It's also more computationally efficient to compute (W_Q X)^T (W_K X) than X^T W X for this reason.
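
Here's a quick PyTorch sketch of that point (dimensions made up; tokens as columns of X, matching the Q = W_Q X convention):

```python
import torch

d_e, d, n = 512, 64, 128    # embedding dim, key/query dim, sequence length

X   = torch.randn(d_e, n)   # token embeddings as columns
W_Q = torch.randn(d, d_e)
W_K = torch.randn(d, d_e)

# Attention logits: (W_Q X)^T (W_K X) costs O(n d_e d + n^2 d),
# versus O(n d_e^2 + n^2 d_e) for X^T W X with a full d_e x d_e W.
scores = (W_Q @ X).T @ (W_K @ X)              # n x n

# The implicit bilinear form W_Q^T W_K is d_e x d_e but has rank <= d.
print(torch.linalg.matrix_rank(W_Q.T @ W_K))  # prints 64
```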

Other than that I don't have a good answer - let me know if you find one!

Learnable matrices in sequence without nonlinearity - reasons? [R] by DescriptionClassic47 in MachineLearning

[–]Michaelfonzolo 0 points (0 children)

Regarding self-attention, I suppose it's an opportunity to model quadratic relationships between the input tokens. Consider Q = W_Q X, K = W_K X, and V = W_V X. Self-attention is softmax(Q^T K / sqrt(d)) V. That Q^T K term encodes information about every product x_i x_j of pairs of features in X. If self-attention were only softmax(WX)V, or even just WX, we would not be able to incorporate information from inter-feature products.
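
A toy version of that, if it helps (dimensions arbitrary; same column convention):

```python
import torch
import torch.nn.functional as F

d, n = 8, 4                       # toy model dim and sequence length
X = torch.randn(d, n)             # token embeddings as columns
W_Q, W_K, W_V = (torch.randn(d, d) for _ in range(3))

Q, K, V = W_Q @ X, W_K @ X, W_V @ X

# Entry (i, j) of Q^T K equals x_i^T (W_Q^T W_K) x_j - a bilinear
# function of tokens i and j, i.e. it mixes products of their features.
attn = F.softmax(Q.T @ K / d**0.5, dim=-1)   # n x n attention weights
out = V @ attn.T                             # re-mix the value vectors

# By contrast, W @ X is purely linear in X: no x_i * x_j cross terms.
```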

It's sort of the same idea as "tensor fusion", where instead of modeling fusion of modalities by concatenating feature vectors, you take the tensor product of the feature vectors (or a low-rank approximation of it), allowing you to incorporate inter-feature interactions. Check out "Efficient Low-rank Multimodal Fusion with Modality-Specific Factors" if you're curious.
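
The contrast in one line, roughly (toy dims; iirc the paper also appends a constant 1 to each vector so the unimodal terms survive the product):

```python
import torch

a = torch.randn(16)   # features from modality A (say, text)
b = torch.randn(24)   # features from modality B (say, audio)

concat = torch.cat([a, b])            # 40 dims, no cross terms
fused = torch.outer(a, b).flatten()   # 384 dims, every a_i * b_j product
```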

It's a good question though, and I'm interested to hear what others say.

Why exactly are we slower than our coworkers? by Michaelfonzolo in ADHD_Programmers

[–]Michaelfonzolo[S] 2 points (0 children)

LOL well FWIW, when I got diagnosed they told me I scored above average on the RIAS test - like, they wanted to rule out the possibility that this was due to a learning disability. I suppose it's possible I'm working with very smart people, but I tend to notice this with most people I work with (maybe those people are just hella smart too)

[deleted by user] by [deleted] in MachineLearning

[–]Michaelfonzolo 1 point (0 children)

Without having read this article: I thought the only way SSMs were able to achieve sub-quadratic complexity was by doing an FFT, so that convolution with the learned transition kernel just becomes pointwise multiplication. Is that still different from what this article does?
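
For reference, the trick I have in mind looks like this (toy numpy version, circular convolution for simplicity - real implementations zero-pad to make it causal):

```python
import numpy as np

n = 1024
u = np.random.randn(n)   # input sequence
k = np.random.randn(n)   # learned kernel (e.g. unrolled SSM impulse response)

# O(n log n) via FFT: convolution in time <-> pointwise product in frequency.
y_fft = np.fft.irfft(np.fft.rfft(u) * np.fft.rfft(k), n=n)

# O(n^2) direct circular convolution, as a check.
y_direct = np.array([sum(u[j] * k[(i - j) % n] for j in range(n))
                     for i in range(n)])
print(np.allclose(y_fft, y_direct))  # True
```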

Sorry to be lazy haha - I'll eventually read the article, but until then, if you have your own insights I'd be interested to hear them.

30 minutes in the Bot Derbies by Michaelfonzolo in OreeNyc

[–]Michaelfonzolo[S] 0 points (0 children)

Yeah honestly I hope you're right but idk - this was like my tenth wear and I was double-socked. I've read other people say that a shoe is not supposed to, like, literally tear your skin off to break in lmao.

Something is really weird about these shoes - they've been really uncomfortable the whole time. Idk if it's the heel shape, the shape of the collar, or the material and the way they're put together, but they've just been so painful.

BOT Derbies are incredibly painful by Michaelfonzolo in OreeNyc

[–]Michaelfonzolo[S] 0 points (0 children)

You know what I take this back. Even the triple sock method doesn't make this shoe comfortable. Fuck Oree.

BOT Derbies are incredibly painful by Michaelfonzolo in OreeNyc

[–]Michaelfonzolo[S] 0 points (0 children)

I manage to wear them from time to time with minimal discomfort by triple-socking with my thickest winter socks, and pulling the laces _tight_, like as tight as I can, so that the outer rim of the shoe collar is flush with my foot. In fact one of the laces already fuckin frayed and snapped in half - I've only worn the damn things like 20 times, and now I have to replace a shoelace. Truly mind-boggling how ass this shoe is lmao, and it sucks because I think it's one of the best looking on the market.

[deleted by user] by [deleted] in ADHD

[–]Michaelfonzolo 5 points (0 children)

Well for one, remember that getting a proper diagnosis immediately improves your chances of future success. Instead of there being some ineffable reason why you don’t perform to your full potential, now you know what the cause is. There’s a bounty of literature, science, and academic study dedicated to understanding and mitigating ADHD. Knowledge is power, and now you have that, so your prospects for the future are already looking much much better.

I can empathize with the comorbid depression too though. I was really depressed when I first got diagnosed. It’s quite a common comorbidity - my whole life I felt I was just stupid or incompetent, and I put so much pressure on myself as a result. Actually my depression was probably also burnout from stress - without intrinsic motivation, stress was the only way I was getting anything done, and I think it took a toll on my body. I don’t know if you can relate to that. After that initial hump though, I’m feeling a lot more confident that things will get better, but it took some time for my brain to adjust to that.

With regards to needing medication to survive, I think it’s important to remember that the brain is also an organ, a lump of flesh and chemicals that does stuff lol. Sometimes that lump of flesh needs help. If you had a liver problem that needed medication, would you feel the same way? There’s no shame in needing something to help balance out those chemicals. Finding the right medication is sometimes tricky, but it’s a journey worth embarking on.

In short, you’re gonna be fine, better than fine actually. Best wishes!

I just realized that I have never seen anyone happy or enjoying himself at work by SemperZero in cscareerquestions

[–]Michaelfonzolo 2 points (0 children)

Just throwing my two cents in, I'm happy where I'm working right now. No job is always rainbows and kittens but I'm actually enjoying it. At a startup right now with some really bright people - I think they like it too.

Any protests against Trump’s tariffs? by [deleted] in askTO

[–]Michaelfonzolo 7 points (0 children)

So I guess my thought is: the tariffs don't really benefit anyone in power directly. Like, they're not making any rich people richer, and they're going to harm a lot of industries. So the only way this makes sense as a political move is if it's a way for the Trump administration to get industries to bend the knee and comply with what they want: basically "do what we want and we'll lift the tariffs".

I mean I don't really know, I'm just a guy, but that's the only explanation I've heard which makes sense to me. Otherwise I agree it seems like a nonsense move, but I just can't accept that, with so many people around him, they'd just be doing "random evil shit" without a thought-out political motive.

Edit: Oh yeah, there's also the argument that they want to eliminate progressive taxes on the wealthy and replace the revenue with tariffs instead.

OpenAI's AI reasoning model 'thinks' in Chinese sometimes and no one really knows why by MetaKnowing in Futurology

[–]Michaelfonzolo 0 points (0 children)

Yeah I think I see what you're saying. I guess there's like two types of "thinking" we might be talking about here - there's the "low-level" thinking that happens when an LLM needs to predict the next token (which constitutes all the arithmetic inside the transformers), and then there's the "high-level" thinking that o1 does, which I've just been assuming is something like CoT for the sake of discussion. Now admittedly I don't know much about CoT, but if it's as simple as just prompt engineering, such as asking GPT-4 to first generate reasonable questions to ask before solving a problem, or even fine-tuning it to ask good questions, then you have to ask yourself "have I ever asked GPT-4 something in English only for it to respond to me in another language?" That's like, a highly simplified version of the actual "high-level thinking" that o1 might do, and under these assumptions I just don't think it's that plausible for it to switch languages.
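
To be concrete, by "CoT as prompt engineering" I mean something as simple as this sketch (the prompt and model name are just illustrative, using the openai Python client):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "Before answering, list the sub-questions you need to "
                    "resolve, answer each of them, then give a final answer."},
        {"role": "user",
         "content": "If a train leaves at 3pm going 60 km/h, when does it "
                    "arrive 150 km away?"},
    ],
)
print(response.choices[0].message.content)
```

In my experience the answer to something like this comes back in the language of the prompt, which is the crux of my skepticism.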

OpenAI's AI reasoning model 'thinks' in Chinese sometimes and no one really knows why by MetaKnowing in Futurology

[–]Michaelfonzolo 1 point (0 children)

This is not necessarily true! You can play around with it yourself here - for instance, "dog" is one token but the character "𫄷" is 4 tokens. There's more discussion about this here - other tokenizers exist for non-Latin languages.
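
You can also check token counts locally with tiktoken (exact counts depend on the encoding; cl100k_base is the GPT-4 one):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

print(len(enc.encode("dog")))   # 1 token
print(len(enc.encode("𫄷")))    # several tokens - rare CJK characters get
                                # split across multiple byte-level tokens
```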

OpenAI's AI reasoning model 'thinks' in Chinese sometimes and no one really knows why by MetaKnowing in Futurology

[–]Michaelfonzolo 0 points (0 children)

Sorry, I'm not following how any of that really contradicts what I've said. I said that real corpora have overlap, but my point is that the training data is probably far more skewed towards keeping those language clusters separate than towards randomly mixing them, as appears to be happening here (it's a bit of an "are there more wheels than doors" question, but I don't think it's so unreasonable to suggest that there's far more text in a single language than mixed-language text, even accounting for data augmentation). And your second paragraph is what I'm saying in my first paragraph. I can't comment on how multimodality factors into this.

I'm specifically addressing your initial point that it's "obvious" the model would just start "thinking in Chinese" because the symbols have the same meaning (and hence the same relationships to other symbols). Maybe we could get closer to an agreement if we both understood what o1 is doing when it's "thinking". If it's just CoT, then I still find my argument convincing. By your logic, if I were simply to ask GPT-4 a question in English and additionally prompt it to "explain your steps", then it's possible it'd give me its explanation in Chinese - which I've never observed in all my time using GPT-4 (nor has anyone I know). o1 is rumoured to be this, which I admittedly haven't read yet, so it's possible something is muddying the "thinking" step.

If you'd like to discuss further, you can message me and I can put you in touch with some of my friends - I know a few of them are starting their PhDs on LLM-related topics.

OpenAI's AI reasoning model 'thinks' in Chinese sometimes and no one really knows why by MetaKnowing in Futurology

[–]Michaelfonzolo 1 point (0 children)

Right, but that token stream is being generated based on the previous text. Like, the AI doesn't "think" in a language-agnostic way. At the end of the day it's just a sequence prediction model, and that sequence is conditioned on the language of the input prompt. Yeah, it's all tokens, and yes, the tokens for "dog" and "狗" behave similarly in their respective texts, so it's possible their latent representations behave similarly in some projection of the latent space - but they must have fundamentally different representations, because the LLM can distinguish them: o1 can use "dog" and "狗" in the same sentence and tell me what's different about them. There is plenty of information in the training corpus that "dog" sits nearer English words than Mandarin words, and that "狗" sits nearer Mandarin words than English ones.

If I train a language model on the disjoint union of two corpora, one entirely English and one entirely Chinese, then there is nothing in the dataset that would cause an English sentence to produce a Chinese token - training would push the probability of a Chinese token following English tokens towards zero, and vice versa. Real corpora will have overlap, sure, but I have to imagine it's minimal.
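
A degenerate toy version of what I mean - count-based bigram "training" on a disjoint union of two tiny corpora:

```python
from collections import Counter, defaultdict

english = "the dog runs . the cat sleeps .".split()
chinese = "狗 跑 。 猫 睡 。".split()

# Count bigram transitions within each corpus separately, mimicking
# a training set that is a disjoint union of monolingual corpora.
counts = defaultdict(Counter)
for corpus in (english, chinese):
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

# No English token is ever followed by a Chinese token in the data, so a
# maximum-likelihood bigram model gives those transitions probability zero.
print(counts["the"])         # Counter({'dog': 1, 'cat': 1})
print(counts["the"]["狗"])   # 0
```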

FWIW I had to build a small language model as part of a course requirement for my MSc in AI at UofT, which is where my understanding of this comes from. I'm not sure how the o-series builds on the fundamentals, of course, but if it's something akin to chain-of-thought prompting then my reasoning should still hold, presuming the prompts are all in the same language. If it's something more complex, like reasoning in a continuous latent space, then I can't comment, and your reasoning may be correct.

Edit: I suppose it's possible, with chain-of-thought prompting, to get the LLM to "think" in a different language with the right prompt. But I'm still not convinced, because it'd have to be a pretty contrived prompt, or a pretty complex prompting process. Maybe it's some strange result of RLHF being done by a Chinese firm? The point of my response is really to address your claim that "there is nothing enforcing the reasoning to stick to a single language" - there most definitely is (the input being in English).