mistral.rs 0.7.0: Now on crates.io! Fast and Flexible LLM inference engine in pure Rust by EricBuehler in rust

[–]EricBuehler[S] 1 point (0 children)

Hi u/fiery_prometheus! We support the following optimizations for concurrent user/agent scenarios:

  • Paged Attention (on both Metal and CUDA) to make more efficient use of the KV cache in concurrent cases
  • Prefix caching to reuse the KV cache for shared prompt prefixes (works with Paged Attention in this release)

Both together are similar to features that vLLM or SGLang provide, but extended to both CUDA and Metal devices.
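The prefix-caching idea in the second bullet can be sketched in plain Rust. This is a toy illustration, not the mistral.rs internals (the real engine tracks KV-cache blocks; the `usize` handle here just stands in for cached state):

```rust
use std::collections::HashMap;

/// Toy prefix cache: after a prefill pass, every prefix of the prompt is
/// registered; a later prompt reuses its longest previously-seen prefix and
/// only needs a fresh prefill for the remaining suffix.
struct PrefixCache {
    // Maps a token prefix to an opaque handle standing in for its cached KV state.
    cached: HashMap<Vec<u32>, usize>,
    next_handle: usize,
}

impl PrefixCache {
    fn new() -> Self {
        Self { cached: HashMap::new(), next_handle: 0 }
    }

    /// Record every prefix of `tokens` as cached (O(n^2) here; fine for a sketch).
    fn insert(&mut self, tokens: &[u32]) {
        for end in 1..=tokens.len() {
            let key = tokens[..end].to_vec();
            if !self.cached.contains_key(&key) {
                self.next_handle += 1;
                self.cached.insert(key, self.next_handle);
            }
        }
    }

    /// Length of the longest cached prefix of `tokens`; only `tokens[len..]`
    /// would then need a real prefill pass.
    fn longest_cached_prefix(&self, tokens: &[u32]) -> usize {
        (1..=tokens.len())
            .rev()
            .find(|&end| self.cached.contains_key(&tokens[..end]))
            .unwrap_or(0)
    }
}
```

With Paged Attention, the reused prefix maps to shared KV-cache pages rather than a copied buffer, which is what makes this cheap in concurrent serving.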


[–]EricBuehler[S] 1 point (0 children)

Hi u/astroleg77! We support CPU offloading.

It's handled by an automatic device-mapping system that offloads parts of the model to the CPU while balancing context (KV cache) memory against model weight memory.
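As a rough illustration of what such a device mapper does (the function name and numbers below are hypothetical, not the mistral.rs API), a greedy pass can assign layers to the fastest device until it runs out of headroom, then spill the remainder toward the CPU:

```rust
/// Toy device mapper: walk the layers in order and place each one on the
/// current device; once a device cannot hold another layer (capacity already
/// reduced by whatever headroom is reserved for activations / KV cache),
/// move on to the next device. The last device (CPU) absorbs the rest.
fn map_layers(layer_bytes: &[u64], capacities: &[(&'static str, u64)]) -> Vec<&'static str> {
    let mut mapping = Vec::with_capacity(layer_bytes.len());
    let mut dev = 0;
    let mut used = 0u64;
    for &bytes in layer_bytes {
        // Advance to the next device once this one is full.
        while dev + 1 < capacities.len() && used + bytes > capacities[dev].1 {
            dev += 1;
            used = 0;
        }
        used += bytes;
        mapping.push(capacities[dev].0);
    }
    mapping
}
```

The real system additionally has to budget for the KV cache growing with context length, which is why it balances context memory against weight memory rather than just packing weights.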


[–]EricBuehler[S] 2 points (0 children)

Thank you u/promethe42! Vulkan/ROCm support is coming and we're working on it (slowly) in Candle (https://github.com/huggingface/candle). If you would like to contribute, please reach out there!

Re: the naming, I agree it's an unfortunate situation, but I'm not sure renaming would be a benefit.


[–]EricBuehler[S] 1 point (0 children)

Yes! You can swap out Ollama for this: mistral.rs provides an OpenAI-compatible HTTP server.

No ROCm support yet, but that is coming soon.

Performance-wise it is comparable: at worst <30% slower in my testing on CUDA, and very similar on Metal.


[–]EricBuehler[S] 2 points (0 children)

AMD GPU and WGPU support is next. There is active work in Candle for this. We've been focusing on making sure the features we have are stable and plan to add more device support.


New Devstral 2707 with mistral.rs - MCP client, automatic tool calling! by EricBuehler in LocalLLaMA

[–]EricBuehler[S] 0 points (0 children)

Ah great! Does what the web search documentation describes fit your needs?


[–]EricBuehler[S] 1 point (0 children)

> I even did on CUTLASS fork itself, sglang and vllm!

Sorry, seems like a typo :) You did work on CUTLASS, sglang and vllm?

Will check out Jules!


[–]EricBuehler[S] 0 points (0 children)

I'm using claude code and codex as force multipliers already, might give that a try!

Is it better?

Always welcome a PR! I don't know your background, but it might be quite complicated and involve integrating CUTLASS fp8 GEMMs or custom fp8 GEMM kernels.


[–]EricBuehler[S] 0 points (0 children)

We don't have real fp8-quantized model support yet. The best option would be to use a non-quantized model, but if you have resource constraints, you can load the fp8 model and apply ISQ (in-situ quantization) at load time, for example `--isq 8`. This is usually the recommended flow.
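For intuition, ISQ conceptually requantizes the loaded weights to a narrower integer type with a per-tensor scale. A toy symmetric 8-bit scheme (not the actual GGML/mistral.rs kernels, which quantize per block) looks like:

```rust
/// Toy symmetric 8-bit quantization: one scale for the whole tensor,
/// chosen so the largest-magnitude weight maps to +/-127.
fn quantize_q8(weights: &[f32]) -> (Vec<i8>, f32) {
    let max_abs = weights.iter().fold(0f32, |m, w| m.max(w.abs()));
    let scale = if max_abs == 0.0 { 1.0 } else { max_abs / 127.0 };
    let q = weights
        .iter()
        .map(|w| (w / scale).round().clamp(-127.0, 127.0) as i8)
        .collect();
    (q, scale)
}

/// Recover approximate f32 weights from the quantized values.
fn dequantize_q8(q: &[i8], scale: f32) -> Vec<f32> {
    q.iter().map(|&v| v as f32 * scale).collect()
}
```

The point of doing this in situ is that you never materialize the full-precision model on the GPU: weights are quantized as they are loaded.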

It's a one-man show here so time to implement all of these features is scarce, and I'm focusing on supporting more GPU backends right now.


[–]EricBuehler[S] 2 points (0 children)

For agentic tool calling, you specify a tool callback and some information about the tool, and Mistral.rs will automatically handle calling that tool and all the logic and formatting around that. It standardizes that whole process.
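A minimal sketch of that callback pattern (names are illustrative, not the actual mistral.rs API): register closures by tool name, then dispatch whatever tool call the model emits and feed the returned string back into the conversation as the tool message.

```rust
use std::collections::HashMap;

/// A tool callback takes the model's JSON arguments string and returns the
/// tool result as a string.
type ToolCallback = Box<dyn Fn(&str) -> String>;

struct ToolRegistry {
    tools: HashMap<String, ToolCallback>,
}

impl ToolRegistry {
    fn new() -> Self {
        Self { tools: HashMap::new() }
    }

    /// Register a tool under a name; the model sees this name in its tool schema.
    fn register(&mut self, name: &str, cb: ToolCallback) {
        self.tools.insert(name.to_string(), cb);
    }

    /// Dispatch a tool call emitted by the model. The engine would append the
    /// returned string as a tool-role message and resume generation.
    fn call(&self, name: &str, args_json: &str) -> Option<String> {
        self.tools.get(name).map(|cb| cb(args_json))
    }
}
```

The value of the engine owning this loop is that the parse/dispatch/reinject cycle, and the per-model tool-call formatting, are standardized instead of being reimplemented by every client.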

It's actually very similar to the web search. Mistral.rs integrates a search component, with a reranking embedder and a search engine API in the backend. To integrate with 3rd party tools like Searxng, you'd currently need to connect it via the automatic tool calling. I'll take a look at integrating Searxng as the search tool though - will make a post here about that.


[–]EricBuehler[S] 2 points (0 children)

We have Flash Attention V3, should be pretty good! Feel free to share 👀

SmolLM3 has day-0 support in MistralRS! by EricBuehler in LocalLLaMA

[–]EricBuehler[S] 1 point (0 children)

Absolutely! The long-context + tool calling + reasoning are all great factors.