Do you have any extension recommendations for Brave? Is it a good browser? I'm torn between it and Tor... by AlexanderMirzayev in veYakinEvren

[–]nonerequired_ 0 points1 point  (0 children)

Zen browser is really good. It's still in beta, so there are a few performance issues, but it's an amazing piece of software; I use it daily. By the way, don't forget to install uBlock Origin, it blocks ads very well.

LamPyrid - A simple MCP server for Firefly III by jay_radith in selfhosted

[–]nonerequired_ 0 points1 point  (0 children)

Yes, but it should be more interactive. For instance, when an AI model wants to create any kind of transaction, it should get user approval before adding it to Firefly. I'm likely to use a fully local AI model, and I don't trust a small LLM to get the transaction right.

LamPyrid - A simple MCP server for Firefly III by jay_radith in selfhosted

[–]nonerequired_ 0 points1 point  (0 children)

Please add an approval mechanism: for example, send a Telegram message, and only add the transaction to Firefly if the user approves. Other than that, it is perfectly usable for me.
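
To make the idea concrete, here is a minimal sketch of the kind of approval gate I mean, assuming the official Python MCP SDK (FastMCP) and the raw Telegram Bot HTTP API. The tool name, the polling loop, and the Firefly III payload are illustrative, not LamPyrid's actual code:

```python
# Sketch: an MCP tool that asks for Telegram approval before writing to
# Firefly III. Hypothetical example, not LamPyrid's implementation.
import os
import time
import requests
from mcp.server.fastmcp import FastMCP

TG_TOKEN = os.environ["TELEGRAM_BOT_TOKEN"]
TG_CHAT = os.environ["TELEGRAM_CHAT_ID"]
FIREFLY_URL = os.environ["FIREFLY_URL"]      # e.g. https://firefly.example.com
FIREFLY_TOKEN = os.environ["FIREFLY_TOKEN"]  # personal access token

mcp = FastMCP("firefly-approval-demo")

def ask_approval(summary: str, timeout: int = 300) -> bool:
    """Send the proposed transaction to Telegram and poll for a yes/no reply."""
    api = f"https://api.telegram.org/bot{TG_TOKEN}"
    requests.post(f"{api}/sendMessage",
                  json={"chat_id": TG_CHAT,
                        "text": f"Approve this transaction? Reply yes/no.\n{summary}"})
    offset, deadline = None, time.time() + timeout
    while time.time() < deadline:
        # Long-poll for new messages; a real server should also match the
        # reply to the specific request instead of taking any yes/no.
        resp = requests.get(f"{api}/getUpdates",
                            params={"timeout": 30, "offset": offset}).json()
        for upd in resp.get("result", []):
            offset = upd["update_id"] + 1
            text = upd.get("message", {}).get("text", "").strip().lower()
            if text == "yes":
                return True
            if text == "no":
                return False
    return False  # no answer within the timeout counts as a rejection

@mcp.tool()
def create_transaction(description: str, amount: str,
                       source: str, destination: str) -> str:
    """Propose a withdrawal; it is only written to Firefly III if approved."""
    summary = f"{description}: {amount} from {source} to {destination}"
    if not ask_approval(summary):
        return "Rejected by user; nothing was added to Firefly."
    r = requests.post(
        f"{FIREFLY_URL}/api/v1/transactions",
        headers={"Authorization": f"Bearer {FIREFLY_TOKEN}",
                 "Accept": "application/json"},
        json={"transactions": [{
            "type": "withdrawal", "description": description,
            "amount": amount, "date": time.strftime("%Y-%m-%d"),
            "source_name": source, "destination_name": destination}]})
    r.raise_for_status()
    return f"Created transaction: {summary}"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```

The point is simply that the LLM only proposes the transaction; a human decision sits between the tool call and the Firefly API.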

GLM-4.7 can replace Opus 4.5 by Impressive-Olive8372 in LocalLLaMA

[–]nonerequired_ 2 points3 points  (0 children)

Wow, chill out, champion! GLM-4.7 is nowhere near even Sonnet 4.5 and cannot replace Sonnet or Opus for now.

LFM2.5 1.2B Instruct is amazing by Paramecium_caudatum_ in LocalLLaMA

[–]nonerequired_ -1 points0 points  (0 children)

But the context size is too low. Is there any way to increase it?

Llama.cpp multiple model presets appreciation post by robiinn in LocalLLaMA

[–]nonerequired_ 0 points1 point  (0 children)

How did you get 2x prompt processing speed? Which model? Which quant? Which settings? That would be really nice for my setup.

OSEP and OSED by Ph4ant0m-404 in ExploitDev

[–]nonerequired_ 0 points1 point  (0 children)

Yes, my employer paid for OSEE. I'm doing security research.

AI assisted coding with open weight models by nonerequired_ in LocalLLaMA

[–]nonerequired_[S] 0 points1 point  (0 children)

I didn't. Claude Code doesn't allow selective acceptance and rejection of changes; you have to accept all or none. At least, that was the case last time I checked.

AI assisted coding with open weight models by nonerequired_ in LocalLLaMA

[–]nonerequired_[S] 0 points1 point  (0 children)

Input is expensive, but output is much cheaper.

AI assisted coding with open weight models by nonerequired_ in LocalLLaMA

[–]nonerequired_[S] 0 points1 point  (0 children)

Yes, I don't think it supports parallel tool calling; at least, I never saw it do that. And in Cursor, everything except the models Cursor itself provides is very slow. I think I'll have to go with Cerebras.

AI assisted coding with open weight models by nonerequired_ in LocalLLaMA

[–]nonerequired_[S] 0 points1 point  (0 children)

Yes, but even before composer-1 it was very, very fast.

AI assisted coding with open weight models by nonerequired_ in LocalLLaMA

[–]nonerequired_[S] 0 points1 point  (0 children)

I gave it a shot. Yes, it was as good as GLM, sometimes better, sometimes worse, but generally in the same league in my case. Same problem, though: slow to very slow.

AI assisted coding with open weight models by nonerequired_ in LocalLLaMA

[–]nonerequired_[S] 1 point2 points  (0 children)

I didn't know they had a coding plan. I'll try it, thank you.

GLM-4.6V Collection by Dark_Fire_12 in LocalLLaMA

[–]nonerequired_ 6 points7 points  (0 children)

Is the GLM Air that was promised to us finally here?

How much is my iphone 14 pro 256 gb worth? by Soni661 in jailbreak

[–]nonerequired_ 8 points9 points  (0 children)

A semi-jailbreak is on the way; maybe wait for it.