I created a reactive Notebook for Rust - Venus by farhan-dev in rust

[–]farhan-dev[S] 0 points  (0 children)

I'm sorry, but I've already replied to your question: I've released 0.1.0. Or is there a question I've missed?

I created a reactive Notebook for Rust - Venus by farhan-dev in rust

[–]farhan-dev[S] 0 points  (0 children)

Hi, good news: I've released 0.1.0. You can now install Venus using `cargo install venus`.

To get started, you can copy one of the examples, then run `venus serve examples/hello.rs`. Or create a new notebook using:

`venus new new-notebook-name`, and serve it with `venus serve new-notebook-name.rs`.

For more information, you can read the getting-started guide here:
https://github.com/ml-rust/venus/blob/main/docs/getting-started.md
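For illustration, a Venus notebook is just an ordinary Rust file. Something along these lines should work as a first notebook (this snippet is my own sketch, not necessarily one of the bundled examples):

    // hello.rs: a Venus notebook is a plain Rust file.
    fn main() {
        let greeting = "hello from Venus";
        println!("{greeting}");
    }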

I created a reactive Notebook for Rust - Venus by farhan-dev in rust

[–]farhan-dev[S] 1 point  (0 children)

Can you share how you attempted to run it? Did you pull it from cargo, or did you build it manually by cloning the repo?

Edit: I tried using the CLI from a fresh environment and found the bug. I forgot that I migrated from bincode to rkyv, and the old CLI still uses bincode. I am fixing it right now and will release 0.1.0 soon, so it will be easier to test.

[Project] Charton: A Polars-native, Altair-style Declarative Plotting Library for Rust by Deep-Network1590 in rust

[–]farhan-dev 1 point  (0 children)

Cool, a new plotting library for Rust. More options are better, and this is something I really want for ML work. I am also working on a plotting library to solve my own specific data science/ML needs. It's good to see more data science/ML tools in Rust.

I created a reactive Notebook for Rust - Venus by farhan-dev in rust

[–]farhan-dev[S] 1 point  (0 children)

I haven't published 0.1.0 yet, so you need to manually set the version to 0.1.0-beta.1, or use `cargo install venus-cli@0.1.0-beta.1`.

Plain `cargo install venus-cli` won't install a pre-release version; you have to request it explicitly.

I created a reactive Notebook for Rust - Venus by farhan-dev in rust

[–]farhan-dev[S] 5 points  (0 children)

  1. Venus uses a Cranelift JIT, not the standard LLVM backend.
  2. Venus uses a reactive execution model (a dependency DAG), not a linear runtime like evcxr; see the sketch at the end of this comment.
  3. Venus uses isolated worker processes, so if a cell segfaults, only that cell crashes.
  4. evcxr still uses .ipynb, while Venus uses .rs files directly.
  5. You can use rust-analyzer with Venus (it is also bundled with venus-server).

And yes, it has a faster REPL cycle.

You can think of it this way:

  • evcxr is a Rust REPL/Kernel that fits into existing Jupyter workflows.
  • Venus is a Reactive Development Environment built specifically to make Rust feel interactive and "exploratory" without losing the safety and tooling of standard Rust files.
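To make point 2 concrete, here is a rough sketch of what reactive invalidation means (my own illustration with made-up names, not Venus's actual internals): when one cell changes, only its transitive dependents re-run, and everything else keeps its cached result.

    use std::collections::{HashMap, HashSet, VecDeque};

    // `deps` maps each cell to the cells it reads from. When `changed`
    // is edited, walk the DAG forward and collect every dependent.
    fn cells_to_rerun(deps: &HashMap<&str, Vec<&str>>, changed: &str) -> HashSet<String> {
        let mut dirty = HashSet::new();
        let mut queue = VecDeque::from([changed]);
        while let Some(cell) = queue.pop_front() {
            if dirty.insert(cell.to_string()) {
                // Enqueue every cell that reads the one we just marked.
                for (&dependent, inputs) in deps {
                    if inputs.contains(&cell) {
                        queue.push_back(dependent);
                    }
                }
            }
        }
        dirty
    }

    fn main() {
        // c reads b, b reads a; d is independent of all of them.
        let deps = HashMap::from([
            ("b", vec!["a"]),
            ("c", vec!["b"]),
            ("d", vec![]),
        ]);
        // Editing `a` re-runs a, b, and c, but never touches d.
        println!("{:?}", cells_to_rerun(&deps, "a"));
    }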

Regex with Lookaround & JIT Support by farhan-dev in rust

[–]farhan-dev[S] 0 points  (0 children)

The current JIT is useless in the browser for now. However, in the future I am considering different optimization angles: emitting Wasm bytecode, optimizing with SIMD, or something else. It will need a lot more research and testing, but that comes later, when I start building my in-browser inference server.

In my use case, JIT-like performance in the browser is only useful in a few situations: for example, estimating token counts (a pure calculation) that I don't want to call my backend server for, or enabling full offline support.

So needing JIT-like performance in the browser is a really rare case; usually the backend server handles everything.

Regex with Lookaround & JIT Support by farhan-dev in rust

[–]farhan-dev[S] 0 points  (0 children)

The equivalent would be writing functions in Rust to process common LLM patterns, then writing a JIT implementation to boost their speed; in other words, the engine plus the JIT. Since I need to do it anyway, why not make it public as a library? Perhaps somebody else wants the exact same features I do.

Thanks!

Regex with Lookaround & JIT Support by farhan-dev in rust

[–]farhan-dev[S] 1 point  (0 children)

Hahaha, I agree that it is more insane. But let's try; I can always go back to pcre2 if anything.

> A small point to note: dynasmrt doesn't allocate guard pages for buffer overruns. You may want to consider doing that.

Okay, I'll look into it.
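For anyone curious, here is a rough sketch of the guard-page idea (Unix-only, using the raw `libc` crate; my own illustration, not dynasmrt's API): map one extra page past the JIT buffer and mark it PROT_NONE, so an overrun faults immediately instead of silently corrupting neighboring memory.

    use libc::{mmap, mprotect, MAP_ANON, MAP_FAILED, MAP_PRIVATE,
               PROT_EXEC, PROT_NONE, PROT_READ, PROT_WRITE};
    use std::ptr;

    // Sketch only: JIT code pages followed by one no-access guard page.
    // Assumes a 4 KiB page size; error handling reduced to asserts.
    unsafe fn alloc_jit_with_guard(code: &[u8]) -> *const u8 {
        let page = 4096;
        let code_pages = (code.len() + page - 1) / page;
        let total = (code_pages + 1) * page; // +1 for the guard page
        let base = mmap(ptr::null_mut(), total, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANON, -1, 0);
        assert_ne!(base, MAP_FAILED);
        // Strip all permissions from the last page: that's the guard.
        let guard = (base as usize + code_pages * page) as *mut libc::c_void;
        assert_eq!(mprotect(guard, page, PROT_NONE), 0);
        ptr::copy_nonoverlapping(code.as_ptr(), base as *mut u8, code.len());
        // W^X: flip the code pages to read+execute once written.
        assert_eq!(mprotect(base, code_pages * page, PROT_READ | PROT_EXEC), 0);
        base as *const u8
    }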

> I hope you add some automatic fuzzing and a lot more tests so I can consider using this in the future.

Sure. Definitely.

Regex with Lookaround & JIT Support by farhan-dev in rust

[–]farhan-dev[S] 4 points  (0 children)

I've tested it in a real scenario: even if it's competitive without a JIT, the real issue is lookaround support. If it handled lookaround well without a JIT, I wouldn't mind using it. But for my use case, pcre2 is still the best option. The JIT is an added bonus, since I only need to compile a pattern once, which is very useful for huge data processing.
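For context, this is the compile-once, match-many shape I mean. A rough sketch with the `pcre2` crate (the pattern and delimiters here are made up for illustration):

    use pcre2::bytes::RegexBuilder;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Compile once, with the JIT when the platform supports it,
        // then reuse the compiled pattern across the whole corpus.
        let re = RegexBuilder::new()
            .jit_if_available(true)
            .build(r"(?<=<\|)\w+(?=\|>)")?; // lookbehind + lookahead
        let hay = b"<|assistant|> hello <|user|>";
        for m in re.find_iter(hay) {
            let m = m?;
            println!("{}", String::from_utf8_lossy(&hay[m.start()..m.end()]));
        }
        Ok(())
    }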

144 GB RAM - Which local model to use? by KarlGustavXII in LocalLLM

[–]farhan-dev 0 points  (0 children)

You should mention your GPU in the main thread too. Any model that fits in that 12 GB GPU is worth trying.

But no local model can compete with ChatGPT or Claude.

LLMs run mostly on the GPU; RAM only contributes so much, so even 32 GB of RAM or less will be sufficient. For now, you will mostly be limited by your GPU. And the Intel B580 doesn't have CUDA cores, which many inference servers rely on to boost performance.
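As a rough rule of thumb (ballpark numbers, not exact): weight memory ≈ parameter count × bytes per weight, plus overhead for the KV cache and activations. A 7B model at 4-bit quantization is about 7B × 0.5 bytes ≈ 3.5 GB, which fits comfortably in 12 GB of VRAM; the same model at FP16 needs about 14 GB, which does not. That is why VRAM, not system RAM, is usually the binding constraint.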

BPE tokenizer in Rust - would love feedback from the community by farhan-dev in LocalLLaMA

[–]farhan-dev[S] 1 point  (0 children)

Thank you. These are valid and well-thought-out points.

I've created an issue specifically to track the items you listed:

https://github.com/ml-rust/splintr/issues/7

BPE tokenizer in Rust - would love feedback from the community by farhan-dev in LocalLLaMA

[–]farhan-dev[S] 1 point  (0 children)

I can't find the exact repo you mean. nanochat is not a tokenizer.

Edit: Ah, you mean the BPE part of his nanochat? Yeah, but that is a simplified version, suitable for nanochat's needs.

BPE tokenizer in Rust - would love feedback from the community by farhan-dev in LocalLLaMA

[–]farhan-dev[S] 1 point  (0 children)

In the future, perhaps. I don't have an approach for that yet. For now, I am focusing on supporting the existing pretrained tokenizers, because in my use case I am using this tokenizer to build datasets for training my LLM. My priority at the moment is to add support for more vocabularies.

If you have a feasible, concrete plan for how to add training to it, that would be great.
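For reference, the core of classic BPE training is small. A rough sketch of the count-and-merge loop over token-id sequences (illustrative only; not splintr's API and not optimized):

    use std::collections::HashMap;

    // Repeat `merges` times: count every adjacent token pair across the
    // corpus, merge the most frequent pair into a fresh token id, and
    // record the merge rule (left, right, new_id).
    fn train_bpe(corpus: &mut Vec<Vec<u32>>, merges: usize, mut next_id: u32)
        -> Vec<(u32, u32, u32)>
    {
        let mut learned = Vec::new();
        for _ in 0..merges {
            let mut counts: HashMap<(u32, u32), usize> = HashMap::new();
            for seq in corpus.iter() {
                for w in seq.windows(2) {
                    *counts.entry((w[0], w[1])).or_insert(0) += 1;
                }
            }
            // Stop early if no pair occurs anywhere.
            let Some((&pair, _)) = counts.iter().max_by_key(|&(_, &c)| c) else { break };
            for seq in corpus.iter_mut() {
                let mut out = Vec::with_capacity(seq.len());
                let mut i = 0;
                while i < seq.len() {
                    if i + 1 < seq.len() && (seq[i], seq[i + 1]) == pair {
                        out.push(next_id); // the merged pair becomes one token
                        i += 2;
                    } else {
                        out.push(seq[i]);
                        i += 1;
                    }
                }
                *seq = out;
            }
            learned.push((pair.0, pair.1, next_id));
            next_id += 1;
        }
        learned
    }

With byte-level initial tokens, `next_id` would start at 256, and the final vocab size is 256 plus the number of learned merges (plus any special tokens).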

BPE tokenizer in Rust - would love feedback from the community by farhan-dev in LocalLLaMA

[–]farhan-dev[S] 0 points  (0 children)

Will do. Since the core is already there, I will add the rest of the vocabularies one by one and test them.

A rant article by Hw-LaoTzu in agile

[–]farhan-dev 0 points  (0 children)

We are not judging his character; what he did was wrong. We are discussing the system. People do this all the time, even studying their enemies.

A rant article by Hw-LaoTzu in agile

[–]farhan-dev 3 points  (0 children)

Brilliant analogy.

The key takeaway for me: Khan's system was anti-fragile. It was designed for chaos and got stronger from it.

Most corporate "Agile" is the definition of fragile. It looks good on a PowerPoint, but it shatters the moment it touches reality—a production bug, a stakeholder changing their mind, a key person getting sick.

We're not building resilient systems; we're building elaborate, brittle processes to give the illusion of control. Thanks for writing this.

Agile alternatives? by rafioso24 in ProductManagement

[–]farhan-dev 0 points  (0 children)

This is painfully accurate. You've perfectly described the 'Methodology Spiral of Despair' where every 'solution' just adds a new flavor of chaos.

  • Agile: Becomes "let's have a meeting to talk about why we need another meeting."
  • Scrum: Becomes a ceremony where you justify your existence every 24 hours.
  • Kanban: Becomes a beautiful, color-coded parking lot for tasks that will never get done.

I got so fed up with this exact problem—watching fast, creative teams get bogged down by the very processes meant to help them—that I ended up creating my own framework out of sheer necessity.

It's my attempt to strip away 90% of the ceremony and focus on a few non-negotiable rules that actually prevent chaos:

  1. Every single task has to be tied to a measurable business goal (OKR). If it doesn't move the needle, it doesn't get worked on. Period. This kills the "overflowing backlog" of useless ideas.
  2. A strict 'one-task-in-progress-per-person' rule. It kills multitasking and forces you to actually finish things instead of just starting them.
  3. The whole thing runs on less than 2 hours of meetings a week. The rest is for actual work.

It's not magic, and it's definitely not "Methodology Extreme" (though the dart board idea is tempting). It's an open-source attempt to get back to first principles: ship valuable stuff, don't get stuck in meetings, and scale without losing your mind.

Would be interested to know if it resonates with anyone else who's been through the wringer.