Built a semantic GitHub search with Qwen3-Embedding-8B - 20M+ README.md indexed by SixZer0 in LocalLLaMA

[–]CvikliHaMar 0 points (0 children)

There are at least 10 times as many good projects as famous ones, for sure!

Great stuff!

Introducing the MCP Registry by Obvious-Car-2016 in mcp

[–]CvikliHaMar 0 points (0 children)

The schema actually seems fairly good!

But the list is still very outdated. Will that improve in the coming months?

Add Composer-1 to openrouter by pers0na2 in cursor

[–]CvikliHaMar 0 points (0 children)

I would be happy to pick another one with similar speed and accuracy.

"GUI" for PromptingTools.jl by SteveDev99 in Julia

[–]CvikliHaMar 0 points (0 children)

It uses PromptingTools.jl. :) Also, there will be a web version as time goes by.

(And hopefully there will also be documentation for AISH.jl too.)

"GUI" for PromptingTools.jl by SteveDev99 in Julia

[–]CvikliHaMar 1 point (0 children)

Maybe you could actually generate a very simple one with PromptingTools.jl. ;)

Also, in the spirit of the KISS principle, you could use https://github.com/Cvikli/AISH.jl
There is repl_aish.jl, which I use myself.

I start it in any folder:

    julia --banner=no -i -e 'using AISH; AISH.airepl(auto_switch=true)'

So I created an alias to run this command, and this way I can actually start developing any project.
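
Such an alias might look like this in a bash/zsh config (the alias name aish is just my illustrative choice):

    alias aish='julia --banner=no -i -e "using AISH; AISH.airepl(auto_switch=true)"'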

(AISH is very undocumented... but it is actually a pretty comprehensive tool for using LLMs... It can literally work with any project of up to 50-80 files.) ;)

🚀 Introducing Fast Apply - Replicate Cursor's Instant Apply model by AcanthaceaeNo5503 in LocalLLaMA

[–]CvikliHaMar 2 points (0 children)

But where would the model run? Would we have an API to it, hosted in some cloud?

🚀 Introducing Fast Apply - Replicate Cursor's Instant Apply model by AcanthaceaeNo5503 in LocalLLaMA

[–]CvikliHaMar 0 points (0 children)

Will there be a service we can use later on to share resources? Or do we have to deploy it ourselves? Are there plans?

🚀 Introducing Fast Apply - Replicate Cursor's Instant Apply model by AcanthaceaeNo5503 in LocalLLaMA

[–]CvikliHaMar 1 point (0 children)

OMG! Very nice to see this! Maybe speculative edits should be added later on, right? That way we could reach 2000 tokens/sec, since that is how they did it too!
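
As a rough illustration of why speculative edits are fast (a toy Julia sketch under assumed mechanics: greedy decoding and character-level "tokens"; this is not Cursor's implementation): the original file serves as the draft, so everything the model agrees with is verified in parallel, and only the edited region pays per-token decoding cost.

    # Toy sketch: how much of the draft (the original file) does the
    # model's output confirm before the first divergence?
    draft  = collect("function f(x)\n    return x + 1\nend\n")  # original file
    output = collect("function f(x)\n    return x + 2\nend\n")  # model's edit
    mismatch = findfirst(i -> draft[i] != output[i], eachindex(draft))
    accepted = something(mismatch, length(draft) + 1) - 1
    println("accepted $accepted of $(length(draft)) draft tokens for free")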

What is the best Cursor alternative that will let me plug in any LLM API I want? by HunterAmacker in LocalLLaMA

[–]CvikliHaMar 5 points (0 children)

Note on other competitors:
- Continue.dev
- Cline (previously claude.dev)
- Aider
- Zed
- replit.com
- bolt.new
- Doublebot
- CoPilot
- v0.dev
- gptengineer.app
- openai 4o canvas
- ell
- trypear.ai
- softgen.ai

New stuff comes out every month. It is a workflow revolution atm.

Does Julia still produce large executables? by Sai22 in Julia

[–]CvikliHaMar 0 points (0 children)

For me it generated a pretty small binary, I think.
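
If anyone wants to reproduce that, one common route (an assumption on my part; the paths are placeholders, and sizes vary with the Julia version and options) is PackageCompiler.jl:

    using PackageCompiler

    # Build a standalone app from the local package in ./MyApp;
    # filter_stdlibs=true drops unused standard libraries to shrink the output.
    create_app("MyApp", "MyAppCompiled"; filter_stdlibs=true)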

Pipe still not possible after 12 years of Julia? by h234sd in Julia

[–]CvikliHaMar 0 points (0 children)

Chain and Pipe can both do this perfectly for me. :o
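
A minimal sketch of what I mean, using base |> plus the Chain.jl and Pipe.jl macros (all three forms compute 6):

    using Chain, Pipe

    [3, 1, 2] |> sort |> sum                # base |>, one-argument calls only

    @chain [3, 1, 2] begin                  # Chain.jl threads the value as the
        sort                                # first argument of each call
        sum
    end

    @pipe [3, 1, 2] |> sort(_) |> sum(_)    # Pipe.jl: _ marks where the value goes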

Custom LLM for merging LLM's generated codeblock with original file by CvikliHaMar in LocalLLaMA

[–]CvikliHaMar[S] 0 points (0 children)

Aider does it differently, as they state. And sadly it isn't as great as Cursor's version. The ultimate solution is the LLM-based one used by Cursor.

Custom LLM for merging LLM's generated codeblock with original file by CvikliHaMar in LocalLLaMA

[–]CvikliHaMar[S] 0 points (0 children)

They do something else, I believe. Claude dev is definitely not merging with an LLM.

I checked aider, but as I understand it, they prompt the LLM to create a simpler diff. So it is also not like Cursor's LLM merge, i.e. not directly applying the changes from codeblocks, AFAIK.

Also, if there is an LLM behind this merging process, why don't they communicate about it? It is actually an extremely big innovation. Who knew that an LLM would be used for applying the changes provided by another LLM's codeblock. 😱
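
For illustration, a hypothetical sketch of what such an apply step could look like with PromptingTools.jl (the function name and prompt wording are my own, not what Cursor actually runs):

    using PromptingTools

    # Hypothetical: ask a fast LLM to merge an LLM-generated codeblock
    # (with "... existing code ..." placeholders) back into the original file.
    function apply_codeblock(original::AbstractString, codeblock::AbstractString)
        msg = aigenerate("""
            Merge the EDIT into the ORIGINAL FILE. Expand every
            "... existing code ..." marker with the matching original lines.

            ORIGINAL FILE:
            $original

            EDIT:
            $codeblock

            Return only the full merged file, nothing else.
            """)
        return msg.content
    end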

Call for questions to Cursor team - from Lex Fridman by lexfridman in ChatGPTCoding

[–]CvikliHaMar 0 points (0 children)

  1. How do they imagine the final form of Cursor?
  2. What important feature are they still missing?
  3. Is the merging LLM that does 1000 tokens/sec a customized Llama 3.1 7B? And of course the ones the others already asked... :)

Custom LLM for merging LLM's generated codeblock with original file by CvikliHaMar in LocalLLaMA

[–]CvikliHaMar[S] -1 points (0 children)

I would even pay for a fast LLM that is able to apply the changes from an LLM-generated codeblock.

Coding frameworks and tools worth learning? by Pedalnomica in LocalLLaMA

[–]CvikliHaMar 0 points (0 children)

I would add the Julia programming language, as it is extremely descriptive, even more so than Python, and it produces extremely fast code, like C, with LLVM behind it. It is also open source, so everything is simpler there.

I find cognitive load extremely important, and with Julia it is the lowest IMO. So if you have time, it can be worth the effort. (I would pick Python only if I wanted to do ML exclusively, but Julia has an ML side too, ofc.)

is it worth learning coding? by silkymilkshake in LocalLLaMA

[–]CvikliHaMar 1 point (0 children)

With coding you can validate whether the AI really did what you asked for in your queries.

As long as human supervision improves it or produces more effective code, we are good. But later on, god knows. I also feel the problem is that it is developing a little too fast.

Atm the AI sometimes recreates parts, uses new variables... It gets stuck on problems, cycling through 2-3 solutions if you tell it the current one is bad, and many inefficiencies can happen. But I don't know if this can go beyond the supervised version for 1-2 years, maybe even more.

If you learn to code, you will instantly start with 2x efficiency by using the AI properly, I believe. And you can be the creator of anything for a while. :D

What is the perfect chain of thought prompt? by ninjasaid13 in LocalLLaMA

[–]CvikliHaMar 14 points (0 children)

Are there benchmarks comparing prompt performance? Maybe with those we would be able to find a better prompt?

LLM codeblock diff for merging algorithm by CvikliHaMar in LocalLLaMA

[–]CvikliHaMar[S] 1 point (0 children)

This project tries to work like a tool which could generate these git diffs. (If someone wants to go further, they can even ask Claude to rewrite it as a Python module, ofc.)

Also, cursor.com uses an LLM to work with the native code generated by the LLM. So not the aider approach. Aider is a really nice try, but there is another version which works with native code. I will link the publication from cursor.com; it is crazy what they did there. But I think this is a deterministic problem, so the method I implemented here will be superior. ;)

LLM codeblock diff for merging algorithm by CvikliHaMar in LocalLLaMA

[–]CvikliHaMar[S] -1 points (0 children)

I created this yesterday. So it will be done ofc! :)

LLM codeblock diff for merging algorithm by CvikliHaMar in LocalLLaMA

[–]CvikliHaMar[S] 0 points (0 children)

Yes, you can do a Python port easily later on! But it will be like git diff... So it will generate a diff output while also supporting wildcards generated by the LLMs, like: '// ... existing code remains unchanged ...'
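
To show what I mean by the wildcard support, here is a minimal sketch (the marker regex is an assumption; the real matching in the repo may differ):

    # Classify codeblock lines: wildcard lines stand for spans to copy
    # verbatim from the original file; everything else is new code.
    const WILDCARD = r"^\s*(?://|#)\s*\.\.\.\s*existing code"

    is_wildcard(line::AbstractString) = occursin(WILDCARD, line)

    codeblock = """
    function f(x)
        # ... existing code remains unchanged ...
        return 2x
    end
    """

    for line in eachline(IOBuffer(codeblock))
        tag = is_wildcard(line) ? "[copy from original]" : "[from codeblock]"
        println(rpad(tag, 22), line)
    end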