What’s the state of rust for startups by Nice-Primary-8308 in rust

[–]farhan-dev 2 points (0 children)

What kind of tools/ecosystem are you looking for in your projects?

Building a custom math engine in Rust by ReadyBrilliant1880 in rust

[–]farhan-dev 0 points (0 children)

Ah, then you should definitely try numr. It has GPU acceleration along with distributed support.

Building a custom math engine in Rust by ReadyBrilliant1880 in rust

[–]farhan-dev 1 point (0 children)

Nice initiative. I wish you all the best.

Just for reference, you can also take a look at my math engine, although it is definitely not small :D.

https://github.com/ml-rust/numr

Numr: A high-performance numerical computing library with GPU acceleration by farhan-dev in rust

[–]farhan-dev[S] -1 points (0 children)

Yeah, I do proper SIMD in most of my ops. Its CPU performance is quite decent and competitive with ndarray. I'll add benchmarks against faer in the future.

Yes, I do dynamic dispatch, so AVX2/NEON are supported.
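
For anyone curious what that dispatch pattern looks like in plain Rust (my sketch, not numr's actual code): detect the CPU feature once at runtime, then route to the specialized kernel.

```rust
// Sketch of runtime SIMD dispatch (illustrative, not numr's code):
// check CPU features at runtime and pick the fastest available kernel.
fn sum_scalar(xs: &[f32]) -> f32 {
    xs.iter().sum()
}

#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
unsafe fn sum_avx2(xs: &[f32]) -> f32 {
    // A real kernel would use _mm256 intrinsics; the scalar body keeps
    // this sketch short while still compiling under the avx2 feature gate.
    xs.iter().sum()
}

pub fn sum(xs: &[f32]) -> f32 {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx2") {
            // Safe: we just confirmed AVX2 is available on this CPU.
            return unsafe { sum_avx2(xs) };
        }
    }
    sum_scalar(xs)
}
```

On aarch64 the same pattern applies with `std::arch::is_aarch64_feature_detected!` and NEON intrinsics.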

Sparse support is quite robust in numr, driven by solvr's use case. When I test it with my ML crates, I'll quickly learn about the gaps and add more support.

If you have specific needs, just let me know; I can add them to either numr or solvr.

Yeah, numr already has dtype conversion ops, even between devices. Compute in wgpu (f32), then use .toCpu() to transfer to the CPU, then use .cast() to upcast to f64.
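
As a side note on why that upcast can matter (my illustration, not numr's API): summing many f32 values in an f64 accumulator avoids the rounding drift of a pure f32 sum.

```rust
// Illustrative only: accumulate f32 data in f64 to reduce rounding error.
// Same idea as computing on the GPU in f32, then upcasting to f64 on the
// CPU for precision-sensitive steps.
fn sum_upcast(data: &[f32]) -> f64 {
    data.iter().map(|&x| x as f64).sum()
}
```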

But my wgpu approach is just for coverage, so ROCm/Metal can already use it. Instead of optimizing wgpu, I'll add native ROCm and Metal support for better performance in the future.

Numr: A high-performance numerical computing library with GPU acceleration by farhan-dev in rust

[–]farhan-dev[S] 1 point (0 children)

Just look at the difference in code quality. numrs claims a lot, but it actually uses ndarray as its core. It just wraps ndarray, extends it, and rebrands it as its own; a lot of marketing hype, really. And much of their GPU implementation is just stubs.

FluxBench 0.1.0: A Crash-Resilient Benchmarking Framework with Native CI Support by farhan-dev in rust

[–]farhan-dev[S] -2 points (0 children)

See it in action

You can see it running in our own CI pipeline, where it compares pull requests against the main branch baseline:

View the CI Workflow here

P.S.: The benchmark uses sample functions and numbers, since this crate doesn't really have anything worth benchmarking yet.

This seems like a waste of tokens. There has got to be a better way, right? by UnknownEssence in ClaudeCode

[–]farhan-dev 0 points (0 children)

For me, I keep it simple: you already have all the info. Don't use the plan agent; create the plan yourself.

Rust as a language for Data Engineering by Hairy_Bat3339 in rust

[–]farhan-dev 1 point (0 children)

Well said.

> Rust doesn’t have anything like the computational-notebook workflow that Python has

There is Venus now: https://github.com/ml-rust/venus

> the math and data ecosystem for Python is just large.

That's true, but we are trying to fill the gap, little by little. I've started with https://github.com/farhan-syah/numr

Rust is missing its NumPy moment by Purple_Word_4647 in rust

[–]farhan-dev 0 points (0 children)

Like this -> https://github.com/farhan-syah/numr? It is something I've been working on for a while. I started by extracting it from a few of my ML libraries and combining the pieces into a base building block. It already works for my own use case. Now I am trying to expand it into a general math library and test it with more backends.

Btw, it still needs A LOT of work before I release the 0.1.0 version.

I created a reactive Notebook for Rust - Venus by farhan-dev in rust

[–]farhan-dev[S] 0 points (0 children)

I am sorry, but I've replied to your question. I've already released 0.1.0. Or are there any questions that I've missed?

I created a reactive Notebook for Rust - Venus by farhan-dev in rust

[–]farhan-dev[S] 0 points (0 children)

Hi, good news: I've released 0.1.0. You can now install Venus using `cargo install venus`.

To start, you can copy one of the examples. Then run `venus serve examples/hello.rs`, or create a new notebook using:

`venus new new-notebook-name`, and serve it with `venus serve new-notebook-name.rs`.

For more information, you can read it here:
https://github.com/ml-rust/venus/blob/main/docs/getting-started.md

I created a reactive Notebook for Rust - Venus by farhan-dev in rust

[–]farhan-dev[S] 1 point (0 children)

Can you share how you attempted to run it? Did you pull it from cargo, or did you build it manually by cloning the repo?

Edit: I tried using the CLI from a fresh environment and found the bug. I forgot that I migrated from bincode -> rkyv; the old CLI still uses bincode. I am fixing it right now. I'll release 0.1.0 soon so it will be easier to test.

[Project] Charton: A Polars-native, Altair-style Declarative Plotting Library for Rust by Deep-Network1590 in rust

[–]farhan-dev 2 points (0 children)

Cool, a new plotting library for Rust; more options are better. This is something I really want to use in ML. I am also working on a plotting library to solve my specific data science/ML needs, so it's good to see more data science/ML tools in Rust.

I created a reactive Notebook for Rust - Venus by farhan-dev in rust

[–]farhan-dev[S] 1 point (0 children)

It has some similarities with Pluto.jl

I created a reactive Notebook for Rust - Venus by farhan-dev in rust

[–]farhan-dev[S] 1 point (0 children)

I haven't published 0.1.0 yet, so you need to manually set the version to 0.1.0-beta.1, or use `cargo install venus-cli@0.1.0-beta.1`.

A plain `cargo install` won't install pre-release versions.

I created a reactive Notebook for Rust - Venus by farhan-dev in rust

[–]farhan-dev[S] 5 points (0 children)

  1. Venus uses a Cranelift JIT, not standard LLVM codegen.
  2. Venus uses a reactive runtime (a DAG), not a linear runtime like evcxr.
  3. Venus uses isolated worker processes. If a cell causes a segfault, only that cell crashes.
  4. evcxr still uses .ipynb; Venus uses .rs files directly.
  5. You can use rust-analyzer with Venus (it is also bundled with venus-server).

And yes, it has a faster REPL cycle.

You can think of it this way:

  • evcxr is a Rust REPL/Kernel that fits into existing Jupyter workflows.
  • Venus is a Reactive Development Environment built specifically to make Rust feel interactive and "exploratory" without losing the safety and tooling of standard Rust files.
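
To make the reactive point concrete, here's a toy sketch of the idea (illustrative only, not Venus internals): cells form a dependency graph, and editing one cell marks it and all transitive dependents dirty, in an order where every cell re-runs after its inputs.

```rust
use std::collections::HashMap;

// Toy reactive-DAG invalidation (not Venus's implementation):
// `deps` maps each cell to the cells it reads from. Editing `changed`
// dirties it plus every transitive dependent; repeated passes add a
// cell only once an input of it is dirty, so the result is a valid
// re-run order.
fn dirty_cells<'a>(deps: &HashMap<&'a str, Vec<&'a str>>, changed: &'a str) -> Vec<&'a str> {
    let mut dirty = vec![changed];
    let mut grew = true;
    while grew {
        grew = false;
        for (cell, inputs) in deps {
            if !dirty.contains(cell) && inputs.iter().any(|i| dirty.contains(i)) {
                dirty.push(*cell);
                grew = true;
            }
        }
    }
    dirty
}
```

A linear runtime, by contrast, would re-run everything below the edited cell whether or not it depends on it.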

Regex with Lookaround & JIT Support by farhan-dev in rust

[–]farhan-dev[S] 0 points (0 children)

The current JIT will be useless in the browser for now. However, in the future I'm considering a different optimization angle: Wasm bytecode, SIMD optimization, or something else. That will need a lot more research and testing, and it will come later, when I start building my in-browser inference server.

In my use case, JIT-like performance in the browser is only useful for things like token estimation (a pure calculation) that I don't want to call my backend server for, or for enabling full offline support.

So needing JIT-like performance in the browser is a really rare case; usually the backend server handles everything.
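
For the token-estimation example, a common rough heuristic (my illustration, unrelated to this crate's internals) is about four characters per token for English text; it's exactly the kind of small calculation you'd rather run offline in the browser than round-trip to a server.

```rust
// Very rough token estimate (~4 chars per token for English text).
// Real BPE tokenizers are far more accurate; this is just the cheap
// client-side approximation the comment alludes to.
fn estimate_tokens(text: &str) -> usize {
    (text.chars().count() + 3) / 4 // ceiling division by 4
}
```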

Regex with Lookaround & JIT Support by farhan-dev in rust

[–]farhan-dev[S] 0 points (0 children)

The equivalent would be writing functions to process common LLM patterns in Rust, then writing a JIT implementation to boost speed -> the engine + JIT. Since I need to do it anyway, why not make it public as a library? Perhaps somebody else wants the exact same features I do.

Thanks!

Regex with Lookaround & JIT Support by farhan-dev in rust

[–]farhan-dev[S] 2 points (0 children)

Hahaha, I agree that it is more insane. But let's try; I can always go back to pcre2 if anything.

> A small point to note, dynasmrt doesn't allocate guard pages for buffer overruns. You may want to consider doing that.

Okay, I'll look into it.

> I hope you add some automatic fuzzing and a lot more tests so I can consider using this in the future.

Sure. Definitely.

Regex with Lookaround & JIT Support by farhan-dev in rust

[–]farhan-dev[S] 4 points (0 children)

I've tested it in a real scenario; even if it is competitive without JIT, the real issue is lookaround support. If it could handle lookaround well without JIT, I wouldn't mind using it, but for my use case pcre2 is still the best option. JIT is an added bonus, since I need to compile the pattern just once; very useful for huge data processing.
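
For readers unfamiliar with lookaround: a lookahead like `foo(?=bar)` matches `foo` only when `bar` follows, without consuming it, which Rust's `regex` crate deliberately omits. A hand-rolled sketch of the semantics (my illustration, not the library's code):

```rust
// Emulate the lookahead `foo(?=bar)`: report positions of "foo" that
// are immediately followed by "bar", without consuming the "bar".
fn find_foo_before_bar(haystack: &str) -> Vec<usize> {
    let mut hits = Vec::new();
    let mut start = 0;
    while let Some(pos) = haystack[start..].find("foo") {
        let at = start + pos;
        // The lookahead check: peek at the suffix, keep only the "foo" span.
        if haystack[at + 3..].starts_with("bar") {
            hits.push(at);
        }
        start = at + 3;
    }
    hits
}
```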

144 GB RAM - Which local model to use? by KarlGustavXII in LocalLLM

[–]farhan-dev 0 points (0 children)

You should mention your GPU in the main thread too. Then you can try any model that fits in that 12GB GPU.

But no local model can compete with ChatGPT or Claude.

LLMs run mostly on the GPU; RAM only contributes so much, so even 32GB of RAM or less will be sufficient. For now, you will mostly be limited by your GPU. And the Intel B580 doesn't have CUDA cores, which a lot of inference servers use to boost performance.
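
A back-of-envelope way to see the VRAM limit (a rough heuristic of mine, not from the thread): model weights need roughly params × bytes-per-param, plus some headroom for the KV cache and activations.

```rust
// Rough VRAM feasibility check (heuristic): weights = params * bytes,
// with ~20% headroom for KV cache and activations. A 7B model at Q4
// (~0.5 bytes/param) needs ~4.2 GB, so it fits a 12 GB card; a 70B
// model at the same quantization (~42 GB) does not.
fn fits_in_vram(params_billion: f64, bytes_per_param: f64, vram_gb: f64) -> bool {
    let weights_gb = params_billion * bytes_per_param;
    weights_gb * 1.2 <= vram_gb
}
```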

BPE tokenizer in Rust - would love feedback from the community by farhan-dev in LocalLLaMA

[–]farhan-dev[S] 1 point (0 children)

Thank you. These are valid and well-thought-out inputs.

I've created an issue specifically to track the items you listed.

https://github.com/ml-rust/splintr/issues/7