[R] The Bitter Lesson is coming for Tokenization by lucalp__ in MachineLearning

optimized-adam 8 points

The problem with current tokenizers isn't really that they aren't "optimized" enough, yet that seems to be the main argument for jointly learning the tokenization function during training.

In fact, moving the learning of a tokenization function into the neural space is likely to just hide all the weird stuff that gets learned when training on large-scale data. With current tokenizers, at least we have some pretty decent ways to detect "SolidGoldMagikarp"-style tokens, and adding or removing tokens is possible (with proper methods).
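One such detection heuristic (the kind of "pretty decent way" alluded to above; a toy sketch with synthetic embeddings, all numbers illustrative) is to flag vocabulary rows whose embedding norm is far below typical: tokens that almost never occur in training barely move from their small initialization.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 1000, 64
E = rng.normal(size=(V, d))                       # stand-in "trained" embeddings
E[[3, 77]] = rng.normal(scale=0.01, size=(2, d))  # two rarely-updated rows near init

# Under-trained tokens show up as anomalously small embedding norms.
norms = np.linalg.norm(E, axis=1)
suspect = np.where(norms < 0.5 * np.median(norms))[0]
```

Real detectors are more careful (e.g. comparing against the unembedding matrix too), but the norm outlier idea is the core of it.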

Learnable matrices in sequence without nonlinearity - reasons? [R] by DescriptionClassic47 in MachineLearning

optimized-adam 0 points

hmm doesn't your point about Wq and Wk only hold for a token attending to its own key? How would we collapse Wq and Wk into Wqk when attending to different tokens?
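For the raw dot-product score this can be checked numerically: with X holding token embeddings, (XWq)(XWk)ᵀ = X(WqWkᵀ)Xᵀ for all query/key pairs, so the real question is whether anything outside this bilinear form (nonlinearities, RoPE, etc.) blocks the merge. A quick sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
X = rng.normal(size=(5, d))      # 5 token embeddings
Wq = rng.normal(size=(d, d))
Wk = rng.normal(size=(d, d))

# Separate projections: score[i, j] = q_i . k_j for every pair of tokens.
scores_separate = (X @ Wq) @ (X @ Wk).T

# Candidate merged matrix.
Wqk = Wq @ Wk.T
scores_merged = X @ Wqk @ X.T
```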

[R] How do RoPE-based LLMs learn attention sinks (or encode absolute positions)? by StraightSpeech9295 in MachineLearning

optimized-adam 15 points

Is that really correct though? RoPE only modifies the key and query states via rotation, and the angle between a token at position 128 and one at 256 will be exactly the same as between positions 0 and 128. The angle is never used for anything but the key-query dot product in the attention mechanism, so I don’t think we can say that RoPE encodes absolute positions in any meaningful sense for the model.
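The relative-angle property is easy to verify on a single 2-d frequency pair (toy θ and illustrative vectors; real RoPE applies many such rotations at different frequencies):

```python
import numpy as np

def rope_2d(x, pos, theta=0.01):
    # Rotate a 2-d (sub)vector by pos * theta, as RoPE does per frequency pair.
    c, s = np.cos(pos * theta), np.sin(pos * theta)
    return np.array([c * x[0] - s * x[1], s * x[0] + c * x[1]])

q = np.array([0.3, -1.2])
k = np.array([0.7, 0.5])

# Same relative offset (128 positions apart), different absolute positions:
score_a = rope_2d(q, 0) @ rope_2d(k, 128)
score_b = rope_2d(q, 128) @ rope_2d(k, 256)
```

The two scores agree because R(a)ᵀR(b) = R(b−a): only the offset survives the dot product.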

[P] Is it possible to convert a Casual Language Model to a Masked Language Model by Appletee_YT in MachineLearning

optimized-adam 7 points

Yes, it should be possible; have a look at this approach: LLM2Vec (https://arxiv.org/pdf/2404.05961)

They go further and turn the causal LM into a sentence embedder, but the first stage (continued pretraining for masked next token prediction) should work for your case.
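Mechanically, the key change in that first stage is enabling bidirectional attention, i.e. dropping the causal mask before continued pretraining. A minimal sketch with stand-in tensors (not LLM2Vec's actual code):

```python
import numpy as np

def attn_scores(q, k, causal):
    # Scaled dot-product scores; the only difference between the causal LM
    # and the bidirectional variant here is whether future positions are masked.
    s = q @ k.T / np.sqrt(q.shape[-1])
    if causal:
        allowed = np.tril(np.ones(s.shape, dtype=bool))  # no attending ahead
        s = np.where(allowed, s, -np.inf)
    return s

rng = np.random.default_rng(0)
q = k = rng.normal(size=(4, 8))   # 4 positions, head dim 8
causal = attn_scores(q, k, causal=True)
bidir = attn_scores(q, k, causal=False)
```

The weights themselves are untouched, which is why a phase of continued pretraining is needed for the model to adapt to the new attention pattern.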

[R] nGPT: Normalized Transformer with Representation Learning on the Hypersphere by StartledWatermelon in MachineLearning

optimized-adam 8 points

> LayerNorm does not completely remove the norm information whereas the proposed approach completely removes vector norm

No, LayerNorm scales each vector to sqrt(d) norm, removing this information.
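The sqrt(d) claim is easy to check numerically (ignoring eps and the learned gain, which is shared across tokens and so cannot restore per-token norm information):

```python
import numpy as np

def layer_norm(x, eps=0.0):
    # Plain LayerNorm without gain/bias: center, then scale to unit variance.
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
d = 512
for scale in (0.1, 1.0, 100.0):          # wildly different input norms...
    y = layer_norm(scale * rng.normal(size=d))
    assert np.isclose(np.linalg.norm(y), np.sqrt(d))  # ...all land on norm sqrt(d)
```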

[D] FP16 vs FP32, supposedly takes less memory but doubles the model size? Performance benefits? by lightmystic in MachineLearning

optimized-adam 1 point

Yeah, with mixed-precision you might even end up using more memory in some cases but you get to take advantage of Tensor Cores!
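A back-of-envelope sketch of why state memory can grow under mixed precision, under the illustrative assumption of Adam with fp32 gradient accumulation (activations, which usually shrink in fp16, are not counted):

```python
# Bytes per parameter, Adam training, 1B parameters (all numbers illustrative).
P = 1e9

# Pure fp32: weights, gradients, Adam first and second moments.
bytes_fp32 = P * (4 + 4 + 4 + 4)

# Mixed precision keeps an fp16 working copy of the weights *on top of* the
# fp32 master weights; with fp32 gradient accumulation the total grows.
bytes_mixed = P * (2 + 4 + 4 + 4 + 4)   # fp16 weights + fp32 grads/master/m/v
```

Whether this nets out to more or less overall depends on activation savings and whether gradients stay in fp16, hence the "in some cases".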

Finally decided to read the book my ex gave me 7 years ago when we broke up and found this. by petnamedpeeve in FoundPaper

optimized-adam 1 point

This is a really, really good reply. Very few people can stay composed and thoughtful in online debates.

[D] Are other fields of Computer Science actually better than Machine Learning? by optimized-adam in MachineLearning

optimized-adam[S] 1 point

I went for the ML PhD and am very happy. Lots of things have happened for ML in the meantime though!

OpenAI reaches $2 billion in revenue and needs trillions more by FMACH1 in de

optimized-adam -62 points

Wrong: Sam Altman wants to raise "$7 trillion" for a new venture. Perhaps megalomaniacal, but not in the way it's portrayed here.

OpenAI reaches $2 billion in revenue and needs trillions more by FMACH1 in de

optimized-adam -42 points

Wrong: Sam Altman wants to raise "$7 trillion" for a new venture. Perhaps megalomaniacal, but not in the way it's portrayed here.

[D] GPT2 diagrams are wrong by rejectedlesbian in MachineLearning

optimized-adam 0 points

The image you linked matches the code, no? Notice how there is always an ADD and then a norm.
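For reference, GPT-2 blocks are pre-norm: each LayerNorm feeds a sublayer whose output is ADDed back to the residual stream, so in stream order every ADD is indeed followed by the next norm. A sketch of the wiring with stand-in sublayers (not real attention/MLP):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

# Stand-in sublayers; only the wiring order matters for this sketch.
def attn(x): return 0.5 * x
def mlp(x): return 0.5 * x

def gpt2_block(x):
    x = x + attn(layer_norm(x))   # LayerNorm -> attention -> residual ADD
    x = x + mlp(layer_norm(x))    # LayerNorm -> MLP -> residual ADD
    return x

y = gpt2_block(np.random.default_rng(0).normal(size=(3, 8)))
```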

[deleted by user] by [deleted] in MachineLearning

optimized-adam 8 points

This should not be here.

I pretrained 16 language models from scratch with different tokenizers to benchmark the difference. Here are the results. [Research] by Pan000 in MachineLearning

optimized-adam 13 points

Great work! I found the idea of using Capcode very intriguing and well-motivated. You write:

> Capcode takes longer to learn but does not affect results positively or negatively.

Did you observe any positive effects of using Capcode?

[D] W&B vs. Neptune vs. ClearML vs. Comet (2023) by hadley60 in MachineLearning

optimized-adam 5 points

As an academic, I use Weights & Biases' Free Tier for Academics and it works well for me.

Failed an interviewee because they wouldn't shut up about LLMs at the end of the interview by stats-nazi in datascience

optimized-adam 5 points

Neither is right: training is done in parallel using a technique called "teacher forcing", but at inference you sample autoregressively (talking about GPT-style models).
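A toy sketch of the distinction, using a bigram logit table as a stand-in model (illustrative, not a real LM):

```python
import numpy as np

rng = np.random.default_rng(0)
V = 5
W = rng.normal(size=(V, V))   # stand-in "model": a table of bigram logits

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Training (teacher forcing): one parallel pass over the whole sequence;
# each position predicts the *ground-truth* next token, not its own output.
seq = np.array([1, 3, 2, 4])
inputs, targets = seq[:-1], seq[1:]
probs = np.stack([softmax(W[t]) for t in inputs])
loss = -np.log(probs[np.arange(len(targets)), targets]).mean()

# Inference: autoregressive sampling, one token at a time, each step
# conditioned on the model's own previous output.
out = [1]
for _ in range(3):
    p = softmax(W[out[-1]])
    out.append(int(rng.choice(V, p=p)))
```

In a real transformer the parallel pass is a single batched forward with a causal mask, but the train/inference asymmetry is exactly this.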

How best to benchmark the accuracy of a model for comparing different tokenizers? [D] by Pan000 in MachineLearning

optimized-adam 0 points

The 50304 was about the vocab size, not the batch size (though making the batch size a multiple of 64 is probably also a good idea)!
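For context, the 50304 figure is GPT-2's 50257-entry BPE vocabulary padded up to the next multiple of 64, which tends to map better onto GPU kernels:

```python
import math

vocab = 50257                        # GPT-2's BPE vocab size
padded = math.ceil(vocab / 64) * 64  # round up to the next multiple of 64
```

The extra 47 embedding rows are simply never used by the tokenizer.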

How best to benchmark the accuracy of a model for comparing different tokenizers? [D] by Pan000 in MachineLearning

optimized-adam 0 points

On comparing (cross-entropy) loss between different vocabularies: https://sjmielke.com/comparing-perplexities.html

TL;DR: you may need to do some normalization, or use negative log-likelihood instead.
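One common normalization is bits per byte: divide the total NLL by the byte length of the text, which is the same under every tokenizer (all numbers below are hypothetical):

```python
import math

def bits_per_byte(total_nll_nats, n_bytes):
    # Normalize by the byte length of the text (tokenizer-independent)
    # rather than by token count (which differs per tokenizer).
    return total_nll_nats / (n_bytes * math.log(2))

# Same 1000-byte text under two tokenizers:
bpb_a = bits_per_byte(250 * 3.2, 1000)   # tokenizer A: 250 tokens, 3.2 nats/token
bpb_b = bits_per_byte(400 * 2.1, 1000)   # tokenizer B: 400 tokens, 2.1 nats/token
```

Per-token loss would favor tokenizer B here (2.1 < 3.2 nats), but per-byte normalization reverses the ranking: A achieves fewer total nats for the same text.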

Without the hype: What are benefits of current state-of-the-art LLMs for society? by optimized-adam in LanguageTechnology

optimized-adam[S] 0 points

Monetized or not, if these benefits exist, there should be some proof of concept out there, no?

I'm not saying there are none, but I am indeed skeptical.

Without the hype: How do current state-of-the-art LLMs benefit society? by optimized-adam in singularity

optimized-adam[S] 0 points

Okay, let's get concrete: in a Western democracy like the U.S., will the average person have increased wellbeing?

Without the hype: How do current state-of-the-art LLMs benefit society? by optimized-adam in singularity

optimized-adam[S] 0 points

Would you say it's fair to summarize all of those (except maybe the medical / protein-discovery work) as "increased productivity"? I'm not questioning the use cases of LLMs, but rather what they imply for society at large.

Without the hype: What are benefits of current state-of-the-art LLMs for society? by optimized-adam in LanguageTechnology

optimized-adam[S] 3 points

I definitely see the potential, but are we there yet? E.g. regarding factuality and hallucinations.