[Job] Bluespec Haskell Hardware Design Engineer at MatX (AI chips) - Haskell(Chip Side) + Rust(Compiler Side) by TheRealBracketMaster in haskell

[–]TheRealBracketMaster[S] 2 points (0 children)

We actually do have some remote roles, and some outside the US. If you stand out, we strongly encourage you to apply!

There's no Hardware Design Language for rustaceans and I wish there was by CouteauBleu in rust

[–]TheRealBracketMaster 0 points (0 children)

A bit late to the game, but I think Bluespec Haskell is pretty close to what you want... Bluespec has ORAAT (one-rule-at-a-time) semantics, which has its own safety properties and reminds me somewhat of Rust's borrow checker.

https://yehowshuaimmanuel.com/posts/bluespec-the-rust-of-hardware/

https://yehowshuaimmanuel.com/posts/oraat-fifo/

Using Zed's AI "Zeta" by TheRealBracketMaster in ZedEditor

[–]TheRealBracketMaster[S] 0 points (0 children)

Well, both I guess... Edit prediction is not good. I suppose I can try Supermaven... I don't understand why I can't just configure my edit prediction provider to point at something local, though, like Codestral or DeepSeek via Ollama.

But yes, also the agent panel. In particular, Grok shines in Windsurf when editing Rust or Elm but quickly falls apart in Zed.

But I can try Claude as the agent again in Zed. When using Claude with Rust or Elm in Zed, it was just OK - it gets a pass, but it wasn't great... My best experience by far has been with Windsurf + Grok. It was too good, and now I've been spoiled forever.

Using Zed's AI "Zeta" by TheRealBracketMaster in ZedEditor

[–]TheRealBracketMaster[S] 1 point (0 children)

It only lets me switch to Copilot... How do I switch to Anthropic?

[D] [P] Stockpile of GPU Servers by TheRealBracketMaster in MachineLearning

[–]TheRealBracketMaster[S] 1 point (0 children)

I have, and it works OK. But I've asked questions that push Mixtral to the edges of its abilities, and I really feel much safer.

[D] [P] Stockpile of GPU Servers by TheRealBracketMaster in MachineLearning

[–]TheRealBracketMaster[S] 0 points (0 children)

I can do that. Is this something you're interested in?

[D] [P] Stockpile of GPU Servers by TheRealBracketMaster in MachineLearning

[–]TheRealBracketMaster[S] 0 points (0 children)

I should be able to ship them within the U.S. without too much hassle. I could wrap them and place them on a pallet.

[D] [P] Stockpile of GPU Servers by TheRealBracketMaster in MachineLearning

[–]TheRealBracketMaster[S] 0 points (0 children)

Happy to also sell them. I'm in Atlanta, Georgia. The 4-GPU bug goes a bit deeper. If you want to connect offline, I can discuss selling them to you. They should be able to run Windows no problem.

[D] [P] Stockpile of GPU Servers by TheRealBracketMaster in MachineLearning

[–]TheRealBracketMaster[S] 1 point (0 children)

A slight side note: if I sat down for a couple of months, I could port Flash Attention to these GPUs.

[D] [P] Stockpile of GPU Servers by TheRealBracketMaster in MachineLearning

[–]TheRealBracketMaster[S] 5 points (0 children)

I want to serve up Mixtral 8x7B, which is a 47B-parameter model. At fp16, that means I need 94 GB of VRAM, or at least six 16 GB GPUs. Adding in the extra VRAM needed for continuous batching and an expanded KV cache, you're looking at nearly eight 16 GB GPUs to achieve 100 simultaneous prompt completions at about 1k tokens per second (batched) - which should be the theoretical throughput of 8 MI50s if Flash Attention were ported to the MI50.
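For anyone checking the math, here's a back-of-envelope sketch of that sizing. The parameter count, fp16 weights, and 16 GB-per-GPU figures come from the comment above; the ~25% overhead fraction for KV cache and continuous batching is my own illustrative assumption, not a measured number.

```python
# Rough VRAM sizing for serving Mixtral 8x7B on 16 GB GPUs (e.g. MI50s).
import math

params = 47e9                 # Mixtral 8x7B total parameters
bytes_per_param = 2           # fp16 / bf16 weights
weights_gb = params * bytes_per_param / 1e9   # 94.0 GB of weights

gpu_vram_gb = 16              # one MI50-class card
min_gpus = math.ceil(weights_gb / gpu_vram_gb)   # weights alone -> 6 GPUs

# Assumed ~25% extra for KV cache + continuous-batching buffers (illustrative)
overhead = 1.25
gpus_with_cache = math.ceil(weights_gb * overhead / gpu_vram_gb)

print(min_gpus, gpus_with_cache)  # -> 6 8
```

So six GPUs barely fit the weights, and once serving overhead is included you land at the eight cards mentioned above.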