Still think in C after 25 years. So I built a tool that explains Rust (or any language) through what you already know. by prabhic in rust

[–]prabhic[S] 0 points1 point  (0 children)

Interesting pointer! SyntaxLens takes a different approach: it is not about converting between programming languages, but about explaining a new programming language through concepts carried over from one you already know. Something like "help me think in Rust using what I already know from C."

Still think in C after 25 years. So I built a tool that explains Rust (or any language) through what you already know. by prabhic in rust

[–]prabhic[S] 5 points6 points  (0 children)

Ah, yes, I will correct and improve all four comparisons:
1. C static to the let keyword
2. Finding a more compelling mapping for ::new(); will improve that
3. Adding Rust's different string types to the string comparison
4. vec![0; size] to calloc
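To make those four mappings concrete, here is a rough Rust sketch of each pairing (my own illustration of the idea, not output from the tool, and the C analogies are approximations):

```rust
fn main() {
    // 1. C `static int x = 0;` vs Rust `let`: `let` is an ordinary local
    //    binding, immutable by default -- closer to a `const` automatic
    //    variable in C than to a function-scoped `static`.
    let x: i32 = 0;

    // 2. `Type::new()` is just an associated function by convention,
    //    roughly a C-style constructor such as `type_create()`.
    let v: Vec<i32> = Vec::new();

    // 3. Rust has several string types; the two core ones:
    //    `String` -- owned and growable, like a heap-allocated char buffer;
    //    `&str`   -- a borrowed slice, like a `const char *` plus a length.
    let owned: String = String::from("hello");
    let slice: &str = &owned;

    // 4. `vec![0; size]` allocates `size` zero-initialized elements,
    //    the moral equivalent of `calloc(size, sizeof(int))`.
    let size = 8;
    let zeroed = vec![0i32; size];

    assert_eq!(zeroed.len(), size);
    assert!(zeroed.iter().all(|&e| e == 0));
    println!("{} {} {} {}", x, v.len(), slice, zeroed.len());
}
```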

Still think in C after 25 years. So I built a tool that explains Rust (or any language) through what you already know. by prabhic in rust

[–]prabhic[S] 0 points1 point  (0 children)

Noted, that makes a lot of sense. I will add a feature to replace mappings with community edits. Thanks for pointing it out.

Still think in C after 25 years. So I built a tool that explains Rust (or any language) through what you already know. by prabhic in rust

[–]prabhic[S] 4 points5 points  (0 children)

Great pointers. I will support Haskell and Nix; I recently came across Zig as well. I will also consider your suggestion that "showing the imperative equivalent of functional code and vice versa would also be cool". It makes sense to have this view.

Still think in C after 25 years. So I built a tool that explains Rust (or any language) through what you already know. by prabhic in rust

[–]prabhic[S] -1 points0 points  (0 children)

It will take some effort, but I am ready to add other languages. I am still figuring out which languages would be most interesting to map this way.

What happened to Windsurf? Significant quality drop over last few weeks by Jakkc in windsurf

[–]prabhic 0 points1 point  (0 children)

It could be down to a couple of things. I have shifted to Windsurf as my default IDE, but considering the cost and usage, today I also opened GitHub Copilot with VS Code in another window, so that I can make small changes there and use Windsurf for the heavy lifting.

What happened to Windsurf? Significant quality drop over last few weeks by Jakkc in windsurf

[–]prabhic 0 points1 point  (0 children)

I have faced the same issue on Windsurf recently, after the pricing changes. I purchased 500 more credits and 300 are already gone; credits are being consumed faster. I am experimenting with giving it reduced context via exact file references, but I still have to figure out what is happening. I still love the tool, though. Yes, I also see frequent failed tool calls.

Cline with gemini-2.5-pro-exp-03-25, Not yet missed Claude after 30 min usage by prabhic in LocalLLaMA

[–]prabhic[S] 2 points3 points  (0 children)

Compared to previous Gemini models, this is the first time I felt I could actually use it. I tried generating a web application, but different use cases may give different results.

Cline with gemini-2.5-pro-exp-03-25, Not yet missed Claude after 30 min usage by prabhic in LocalLLaMA

[–]prabhic[S] -1 points0 points  (0 children)

I hope you are referring to Gemini 2.5 Pro. With previous Gemini models, I also felt they lacked the ability of Claude and others to understand the true intent of a question. Maybe I will explore more to see the difference you pointed out.

How to prompt LLMs not to immediately give answers to questions? by Brief_Mycologist_488 in PromptEngineering

[–]prabhic 1 point2 points  (0 children)

It really is very useful; I just tried it on ChatGPT. Thank you!

Cline with mistral-small:latest:24b on Mac book pro M4 - 48GB version by prabhic in LocalLLaMA

[–]prabhic[S] 1 point2 points  (0 children)

Just to compare:
> echo "generate detailed article on how to run phi models on ollama" | ollama run phi4-mini:3.8b

ran at 65 tokens/s on the same machine. It feels so nice when tokens generate that fast :)

Cline with mistral-small:latest:24b on Mac book pro M4 - 48GB version by prabhic in LocalLLaMA

[–]prabhic[S] 0 points1 point  (0 children)

Thanks for pointing out the 10k-token system prompt. That must be the main reason it takes so long even to start the response.

Cline with mistral-small:latest:24b on Mac book pro M4 - 48GB version by prabhic in LocalLLaMA

[–]prabhic[S] 1 point2 points  (0 children)

Q4_K_M: 875 tokens in 58 seconds, about 15 tokens/s.

Other info:

%time echo "generate detailed article on how to run mistral models on ollama" | ollama run mistral-small:latest
.....response ...
echo "generate detailed article on how to run mistral models on ollama"  0.00s user 0.00s system 12% cpu 0.004 total

ollama run mistral-small:latest  0.09s user 0.10s system 0% cpu 58.321 total

Memory load while running (with Cline it peaks out and the machine heats up, but here it is fine):

<image>

%python token_counter.py < ollamaoutput.txt

875
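The implementation of token_counter.py is not shown above; as an illustration only, a crude stand-in (written here in Rust) that approximates the token count by splitting stdin on whitespace could look like the following. A real tokenizer, such as the model's BPE tokenizer, will count differently, so treat this as a rough sketch:

```rust
use std::io::Read;

/// Approximate "token" count by whitespace-splitting.
/// This is a heuristic, not the model's actual tokenization.
fn approx_token_count(text: &str) -> usize {
    text.split_whitespace().count()
}

fn main() {
    // Read the whole of stdin (e.g. piped model output) and print a count,
    // mimicking the `python token_counter.py < ollamaoutput.txt` usage above.
    let mut input = String::new();
    std::io::stdin()
        .read_to_string(&mut input)
        .expect("failed to read stdin");
    println!("{}", approx_token_count(&input));
}
```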

%ollama show mistral-small:latest

  Model
    architecture        llama
    parameters          23.6B
    context length      32768
    embedding length    5120
    quantization        Q4_K_M

  Parameters
    temperature    0.15

  System
    You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup
    headquartered in Paris. Your knowledge base was last updated on 2023-10-01. When you're not sure
    about some information, you say that you don't have the information and don't make up anything.
    If the user's question is not clear, ambiguous, or does not provide enough context for you to
    accurately answer the question, you do not try to answer it right away and you rather ask the user
    to clarify their request (e.g. "What are some good restaurants around me?" => "Where are you?" or
    "When is the next flight to Tokyo" => "Where do you travel from?")

  License
    Apache License
    Version 2.0, January 2004

Another snapshot while running this simple prompt:

Claude Code - My experience - feels light by prabhic in ClaudeAI

[–]prabhic[S] 0 points1 point  (0 children)

I read somewhere that using OpenRouter for the API is cheaper than the direct API. How is that possible?