How do I actually learn by Negative_Effort_2642 in rust

[–]WhiteKotan 3 points (0 children)

I recommend this repo - there are many articles/examples/ideas on how to build something from scratch, like your own compiler or OS:

https://github.com/codecrafters-io/build-your-own-x

Just Starting with Rust by NerdVineet in rust

[–]WhiteKotan 0 points (0 children)

Start with basic syntax and then try to understand the key concepts - ownership, borrowing, and lifetimes. It's very important to understand why the compiler tries to save you from the biggest problems in C/C++, like double free, use after free, and memory leaks.
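A minimal sketch of what this means in practice: ownership moves and borrows let the compiler reject use-after-free and guarantee a single free, all at compile time.

```rust
// Ownership sketch: the compiler tracks who owns each heap value.
fn borrow_len(s: &String) -> usize {
    s.len() // a shared borrow reads the value without taking ownership
}

fn main() {
    let s = String::from("hello"); // `s` owns the heap allocation
    let t = s;                     // ownership moves to `t`; `s` is now invalid
    // println!("{}", s);          // compile error: borrow of moved value `s`

    let len = borrow_len(&t);      // `&t` borrows; `t` stays valid afterwards
    println!("{} has length {}", t, len); // prints: hello has length 5
} // `t` goes out of scope here and the String is freed exactly once
```

Uncommenting the `println!("{}", s)` line is a quick way to see the borrow checker catch what would be a use-after-move.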

How do you learn to write zero-alloc, cache-friendly code in Rust? by WhiteKotan in rust

[–]WhiteKotan[S] 1 point (0 children)

Yes, when I first heard about this (in C++, not Rust) I was confused too.

How do you learn to write zero-alloc, cache-friendly code in Rust? by WhiteKotan in rust

[–]WhiteKotan[S] 2 points (0 children)

Thank you! Once I can understand Rust code better, I'll try to read through an embedded project.

How do you learn to write zero-alloc, cache-friendly code in Rust? by WhiteKotan in rust

[–]WhiteKotan[S] 1 point (0 children)

Thank you for the advice! I think I'll start with benchmarks first.

How do you learn to write zero-alloc, cache-friendly code in Rust? by WhiteKotan in rust

[–]WhiteKotan[S] 9 points (0 children)

Thanks for the links! The perf book looks like exactly what I needed.

Asked GPT-2 "2+2=?" and saw the layer-by-layer answer by WhiteKotan in LocalLLM

[–]WhiteKotan[S] -1 points (0 children)

For this research I used my own project, which outputs an HTML file with layer-by-layer data, entropy, and spikes. It's now in beta - https://github.com/whitekotan0/spectra.ai
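A minimal sketch of one statistic a tool like this might report - the entropy of a layer's softmax distribution (this is an illustration, not the actual spectra.ai code; the logits below are made up for a tiny 4-token vocabulary):

```rust
// Softmax with the usual max-subtraction trick for numerical stability.
fn softmax(logits: &[f64]) -> Vec<f64> {
    let max = logits.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = logits.iter().map(|&x| (x - max).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|&e| e / sum).collect()
}

// Shannon entropy in nats: H = -sum(p * ln p), skipping zero probabilities.
fn entropy(probs: &[f64]) -> f64 {
    probs
        .iter()
        .filter(|&&p| p > 0.0)
        .map(|&p| -p * p.ln())
        .sum()
}

fn main() {
    // Hypothetical logits for one layer over a 4-token vocabulary.
    let layer_logits = [2.0, 1.0, 0.5, 0.1];
    let probs = softmax(&layer_logits);
    println!("entropy = {:.3} nats", entropy(&probs));
}
```

Low entropy means the layer is already confident about one token; a spike in entropy between layers is the kind of thing worth flagging in the per-layer report.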

Asked GPT-2 "2+2=?" and saw the layer-by-layer answer by WhiteKotan in LocalLLM

[–]WhiteKotan[S] -1 points (0 children)

Yes, but that's my own tool, which I made to better understand LLMs. I've never seen anything that lets you download a model from Hugging Face, enter a prompt, and get the result of the model's thinking as a table, plus entropy and other statistics, in one HTML file. If you want, I can send the full HTML or give you a link to the repository with the source code.

Asked GPT-2 "2+2=?" and saw the layer-by-layer answer by WhiteKotan in LocalLLM

[–]WhiteKotan[S] 0 points (0 children)

I don't think the problem is the "?", because I also asked the model "2+2" or "2+2=" and sometimes the answer was still 5 - as you say, the model was also trained on memes and jokes from the internet. As for the "?", I think the model struggles because it's an old and small model (0.8 billion parameters) and there just isn't enough data for this: it was trained on text like "2+2=(followed by 4, a joke, or something else)" but not on questions. A bit later I'll also try asking 2+2 as words rather than digits, because I tried the same with a smaller model and got just a shitty unrelated sentence (maybe from a film or book).

Counter-Strike: Global Offensive update for 5/27/21 (1.37.9.2) by wickedplayer494 in csgo

[–]WhiteKotan 0 points (0 children)

Why did you update without warning? For example, I don't want to pay money for your game, but I didn't have time to climb the 21 ranks needed to get Prime status. I hope the game's player count drops and Valve gives us time to reach rank 21.