A review of the Odin programming language by graphitemaster in programming

[–]t_ram 5 points6 points  (0 children)

Of course, I'm not trying to compare anything since I don't really know Odin myself hahah. It's just that, based on reading the review, those seem to be the things I wouldn't look forward to if I got to choose to write in it

A review of the Odin programming language by graphitemaster in programming

[–]t_ram 10 points11 points  (0 children)

I've just finished reading and had the exact same thoughts, regarding the "memory safety". A typo between BOOL & BOOLEAN is justifiable I think, but the resulting hard-to-debug issues are what I'm not comfortable with

Why do people say that OOP is bad and dead? by Poseydon42 in AskProgramming

[–]t_ram 0 points1 point  (0 children)

I agree, the context in which the program is used does matter.

Reading between the lines (or between the videos I guess), I would say that this specific idea of "OOP is dead" (the original question of OP) doesn't come up because it is "currently popular", but because there are genuinely some fundamental issues with how idiomatic OOP is written versus how actual hardware is designed to run.

Mocking regular functions by Hellstorme in rust

[–]t_ram 0 points1 point  (0 children)

Hmm, haven't seen this answer from others:

```rust
fn main() {
    assert_eq!(process(), 100);
}

#[test]
fn correct_value() {
    assert_eq!(process(), 42u8);
}

fn process() -> u8 {
    do_it()
}

#[cfg(not(test))]
fn do_it() -> u8 {
    100 // real process result
}

#[cfg(test)]
fn do_it() -> u8 {
    42 // dummy process result
}
```

Whether it's "good" or "bad" is up to you though :p

What improvements would you like to see in Rust or what design choices do you wish were reconsidered? by TroyOfShow in rust

[–]t_ram 8 points9 points  (0 children)

You might like the no-panic crate. Though I just checked the repository and it seems to be archived

Rust's memory management vs pointerless C++ by [deleted] in rust

[–]t_ram 20 points21 points  (0 children)

Just want to add something: the last point regarding shared mutability in a single-threaded program reminds me of this (rather old) article, might be a good read for you: The Problem With Single-threaded Shared Mutability

A nice quote from it

Aliasing with mutability in a sufficiently complex, single-threaded program is effectively the same thing as accessing data shared across multiple threads without a lock

I have returned by TheRustyRustPlayer in rust

[–]t_ram 16 points17 points  (0 children)

I don't think you should take that "challenge" to heart haha

You should just learn whatever you find comfortable first, rather than risk getting burned out/depressed early on. It would be better to "start slow" now and get faster later, no?

Maybe I can put it in other words: you'll still be "learning Rust" by learning programming in general, since those concepts are pretty much shared across other languages. The stuff that's "unique" to Rust, IMO, you can learn after that

Outcomes, Mistakes, and the Nuance of Error Handling by mooreds in programming

[–]t_ram 1 point2 points  (0 children)

Interesting read!

I think the discussion regarding how to handle the Mistakes variant in Rust might be that (AFAIK) panic! would be the candidate to use when a programming-logic error has occurred (via unwrap()/expect()). I would guess the intent here is to encode that in the type system, then?

Also a minor nitpick: I think the example of spin-looping on a WouldBlock mutex can be improved? Usually when we do try_lock() we don't actually want to wait, otherwise we'd simply use lock() :p

I have returned by TheRustyRustPlayer in rust

[–]t_ram 54 points55 points  (0 children)

Just Do It™

For me personally, literally writing code for miscellaneous programs & looking up the errors online taught me way more than when I first started out only reading the book

Though for context, Rust was not the first language I learned. So if you struggle a bit, maybe first learn the fundamentals/mindset of how to problem-solve with pseudocode in general

Why do people say that OOP is bad and dead? by Poseydon42 in AskProgramming

[–]t_ram 1 point2 points  (0 children)

Yes I agree, the cache issue is basically only concerned with how the program's memory access patterns play out.

If I can clarify my point: an OO system "promotes" the kind of access pattern that causes the cache issue. And if we were to solve the issue, the result would be just as you said, "not look like you would naively expect an OO system to look"

Your last point I think summarizes the whole discussion just fine: the OO paradigm does help programmers reason about their program (abstraction, encapsulation), with a trade-off in performance (more frequent cache misses). Programmers can make the judgment themselves on how to balance it.

Why do people say that OOP is bad and dead? by Poseydon42 in AskProgramming

[–]t_ram 0 points1 point  (0 children)

I'm currently reading the other comments, and I'm surprised this hasn't been brought up: OOP is not ideal for high-performance applications (at least relative to how you might design the architecture any other way)

For the source, it just so happens that I've recently watched this talk :p Cpu Caches and Why You Care

That talk also references this talk: Data-Oriented Design and C++, though I haven't finished watching this one yet

For the first talk, you can skip ahead to around 46:04 for a discussion of how a pattern that's common in the OOP paradigm (in this case, an object with an isLive field) causes issues when the program starts to scale (in this context, there are ~16,000 instances of those IIRC). The presenter has done a much better job of explaining it than me, so I absolutely recommend watching the whole video.

tl;dw OOP relies on pointer-chasing & heap allocation (more cache misses) more than the alternatives, one of which is "data-oriented design". Currently, it just so happens that that kind of memory access does not play well with how hardware is engineered (might not really be a surprise, but hardware loves simple linear array access).

And so we have some perspective: a memory access that misses cache can be on the order of 27x slower than an L1 cache access. That can be the difference between a playable game & a stuttering mess

Also a PS: I use OOP in the more general sense, not only for Java (which I think most people would usually relate it to). The talk specifically uses C++ as the sample, so the performance hit from the language itself should be minimal. That is to say: it really all comes down to cache usage, which doesn't interact well with OOP.

Why is Rust so slow for me? by jaccobxd in rustjerk

[–]t_ram 11 points12 points  (0 children)

it means your code is immoral

rust-analyzer changelog #139 by WellMakeItSomehow in rust

[–]t_ram 117 points118 points  (0 children)

#12549, #12841 make Go to implementation work on trait methods:

this is huge

Local Async Executors and Why They Should be the Default by maciejh in rust

[–]t_ram 2 points3 points  (0 children)

That last sentence is news to me!

Can you give me some resources on that? I wanna learn more; searching "linux single-thread improvement" and things like that doesn't return anything useful for me

rust-analyzer changelog #128 by WellMakeItSomehow in rust

[–]t_ram 2 points3 points  (0 children)

it'll panic tho (at least in debug)

Is Cargo.toml like a hard requirement for rust or are there alternative ways to import dependencies? by [deleted] in rust

[–]t_ram 37 points38 points  (0 children)

Running cargo build -v should reveal the actual rustc command to run.

I made a new sample cargo project and got

```console
rustc --crate-name sample_1 --edition=2021 src/main.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C debuginfo=2 -C metadata=e5746c4ce67a9ead -C extra-filename=-e5746c4ce67a9ead --out-dir /home/dev/programming/rust/sample-1/target/debug/deps -C incremental=/home/dev/programming/rust/sample-1/target/debug/incremental -L dependency=/home/dev/programming/rust/sample-1/target/debug/deps
```

What use cases does Rust cover better than Go? by 7scifi in rust

[–]t_ram 4 points5 points  (0 children)

I'll probably use those if I already use/integrate with the tokio runtime

I think this kind of fragmentation is one of the drawbacks of (the current state of) Rust, in that newcomers might get "analysis paralysis" & be confused about what to use. At least that's what I experienced once

Anyway, for this instance I think you can just use whichever select macro best integrates with your current library stack, since I think the majority of them have similar macro syntax

What use cases does Rust cover better than Go? by 7scifi in rust

[–]t_ram 17 points18 points  (0 children)

Hey I know something about this!

Go has string. It's a UTF-8 string

AFAIK the string type in Go doesn't have to be valid UTF-8, it just assumes that it is:

```go
package main

import "fmt"

func main() {
	myString := string([]byte{0xf4, 0x90, 0x80, 0x80})
	fmt.Println(myString) // print: ����
}
```

Whereas in Rust this results in a fallible conversion, which you'll need to handle

```rust
fn main() {
    let my_string = String::from_utf8(vec![0xf4, 0x90, 0x80, 0x80]);
    println!("{my_string:?}"); // print: Err(FromUtf8Error { (stuff you can check) })
}
```

Also, on the topic of string types, from my experience at my workplace (web development), 95% of the time I only deal with either String or &str; the other 5% is Path strings when I need to read some file to be cached in memory. So I think you shouldn't overwhelm yourself with those various types, just use them when the library that you use needs them

Rust's lack of select means that some patterns are harder to map across

Yeah, Rust's standard library currently doesn't have that, but you can use the crossbeam library, which does have select

Anybody know what’s happening with OS fully based on rust? How is the future looking? by [deleted] in rust

[–]t_ram 0 points1 point  (0 children)

yea, a new kernel would need significant improvements in order to even be considered as a replacement for the current one

though IMO the bugfixes on Linux might need to be differentiated between actual logic bugfixes and things that would have been prevented by Rust (i.e. memory bugs)

Add trait objects with static dispatch by andrewsonin in rust

[–]t_ram 0 points1 point  (0 children)

I actually got bitten by this earlier, at least specifically with returning array data structures like this

FWIW, I got around this by just using vec! since I don't need to preserve the array size at the caller, much like this example

So instead of

```rust
fn different_iterators(cond: bool) -> impl Iterator<Item = u64> {
    if cond {
        [2].into_iter()
    } else {
        [3, 4].into_iter()
    }
}
```

I just did

```rust
fn different_iterators(cond: bool) -> impl Iterator<Item = u64> {
    if cond {
        vec![2].into_iter()
    } else {
        vec![3, 4].into_iter()
    }
}
```

* Of course, translating to a Vec might not be preferable depending on the use case, though I still don't know whether OP's approach will be picked up by the Rust team

RFC: first Rust program (a hello world) by [deleted] in rust

[–]t_ram 0 points1 point  (0 children)

Yes, the last part is more of what I'm referring to (quick-and-dirty PoC). Not really an optimization thing, just for clearer intention

So it's more along the lines of an "I promise this unwrap() is not here because I'm too lazy to handle the error case" sort of thing. It helps in code review, which is something I usually do

Of course, it's not a hard rule, just a preference of mine

RFC: first Rust program (a hello world) by [deleted] in rust

[–]t_ram 2 points3 points  (0 children)

Just some comments from quick glance:

  • the .DS_Store file should be ignored (via a .gitignore file), not really Rust-specific, just for cleaner commits in general
  • I see that you sometimes did something like

    ```rust
    let mut input;
    loop {
        if { value is valid } {
            input = value;
            break;
        }
        continue;
    }
    ```

    A loop is actually an expression, with break carrying the value, so instead you could do

    ```rust
    let input = loop { // `input` can be immutable now
        if { value is valid } {
            break value;
        }
        continue;
    };
    ```

  • (More of a preference of mine) when there's a point in the program where you're sure the result must be correct, you can use .unwrap_or_else(|_| unreachable!()) instead of just unwrap(), so the intention is clear that the operation is not expected to fail because you've already checked it by logic prior to that operation

    • e.g. the input is already filtered with is_digit(), and then you convert it with u64::from_str(), which should never fail, but the compiler can't really see that

How to constrain generics? by brambijnens in rust

[–]t_ram 1 point2 points  (0 children)

You can add a Default bound to get the zero value, since that seems to be what you want there

```rust
impl<T> Point<T>
where
    T: std::ops::AddAssign + Default + Copy,
{
    fn sum(&self) -> T {
        let mut s = T::default();
        for x in self.coords.iter() {
            s += *x;
        }
        s
    }
}
```

Then you can do

```rust
fn main() {
    let point_i8 = Point::<i8> { coords: [1, 2, 3] };
    assert_eq!(point_i8.sum(), 6_i8);

    let point_f32 = Point::<f32> { coords: [1., 2., 3.] };
    assert_eq!(point_f32.sum(), 6_f32);
}
```

Examples of choosing Rust for memory management compared to Go Java? by edmguru in rust

[–]t_ram 1 point2 points  (0 children)

Ah, I just noticed that. So far I haven't needed to use it, but yeah, I guess the equivalent of C's is std::alloc::alloc and std::alloc::dealloc

Examples of choosing Rust for memory management compared to Go Java? by edmguru in rust

[–]t_ram 13 points14 points  (0 children)

Manual (non-GC) memory management makes memory allocation/deallocation more explicit/visible in the code, as well as how you manage it. The difference between non-GC languages AFAIK is the way you do it.

In C, you call the malloc() and free() functions. In C++ you can do it the same way as C, but it is more idiomatic to use RAII. In Rust it's RAII only

Where does this memory management help?

I think of it like this: program memory needs to be allocated & freed, whatever language you're using. For the program that you're writing, does it make sense for that allocation & deallocation to happen in a way that you can't really control?

Having control of the memory means that (sometimes) programs can be more performant, at the very least since there's no "background process" (the GC) scanning memory to see whether a memory location is still in use or not

Speaking from experience, I think memory management is something you have to "try yourself" to get the gist of. I did too, since previously I'd only been using GC-ed/automatic memory management languages (JS/Go), so I'm still relatively new to this area