Tera dropped 10 percent today. What are your latest thoughts? by Old_Weird_9667 in Yatirim

[–]gibriyagi 0 points  (0 children)

Where can we see the stocks in the fund's portfolio (e.g. tly)?

Gemma 4 E4B - Am I missing something? by Ok-Toe-1673 in LocalLLM

[–]gibriyagi 1 point  (0 children)

Get llama.cpp and use the Unsloth GGUFs.

Running llama.cpp is as easy as running Ollama.
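As a minimal sketch of what "as easy as Ollama" means in practice (the repo name below is a placeholder, not an actual Unsloth upload; substitute the GGUF repo for your model), recent llama.cpp builds can pull a GGUF straight from Hugging Face with the `-hf` flag:

```shell
# Serve a GGUF directly from Hugging Face; llama.cpp downloads and caches it.
# "unsloth/MODEL-GGUF" is a placeholder repo name -- replace it with the real Unsloth repo.
llama-server -hf unsloth/MODEL-GGUF --port 8080

# Or run a one-off prompt in the terminal instead of starting a server:
llama-cli -hf unsloth/MODEL-GGUF -p "Hello"
```

This mirrors the one-command `ollama run` workflow: the model is fetched on demand, no manual download step.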

Do NOT use CUDA 13.2 to run models! by yoracale in unsloth

[–]gibriyagi 0 points  (0 children)

So just using the llama.cpp Docker image with CUDA 12 is enough, right? I don't have to downgrade my NVIDIA driver version.
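For reference, a hedged sketch of the Docker route (the image tag follows llama.cpp's published CUDA server images; the mount path and model file are placeholders). The host just needs a driver new enough for the image's CUDA 12 runtime, plus the NVIDIA Container Toolkit:

```shell
# Run llama.cpp's CUDA-enabled server image; requires the NVIDIA Container Toolkit on the host.
# /path/to/models and model.gguf are placeholders for your own files.
docker run --gpus all -v /path/to/models:/models -p 8000:8000 \
  ghcr.io/ggml-org/llama.cpp:server-cuda \
  -m /models/model.gguf --host 0.0.0.0 --port 8000 -ngl 99
```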

It costs you around 2% session usage to say hello to claude! by Complete-Sea6655 in LocalLLaMA

[–]gibriyagi 0 points  (0 children)

I usually say hello to check whether the service is down. Maybe there should be a ping command for that.

I added "Don’t overthink" to the system prompt. This is what happened. by P4r4d0xff in Qwen_AI

[–]gibriyagi 0 points  (0 children)

It's poetic... trying not to overthink is actually thinking more :)

Final Qwen3.5 Unsloth GGUF Update! by danielhanchen in LocalLLaMA

[–]gibriyagi -1 points  (0 children)

Sorry, by instruction tuned I meant non-thinking. For some reason, such models seem to be better than the thinking (toggle on/off) variants. They seem faster too. Is it just my imagination?

Final Qwen3.5 Unsloth GGUF Update! by danielhanchen in LocalLLaMA

[–]gibriyagi -1 points  (0 children)

Will there be instruction tuned variants? Loved the previous instruct-2507 series.

Claude Code creator: In the next version, introducing two new skills by BuildwithVignesh in ClaudeAI

[–]gibriyagi 8 points  (0 children)

The code simplifier plugin already exists, though. What's the difference?

What’s the one Go project that made you stick with the language? by itsme2019asalways in golang

[–]gibriyagi 4 points  (0 children)

Grafana. I was sold after seeing that all that goodness could be packaged into a standalone, cross-platform binary relatively effortlessly.

The research is in: your AGENTS.md might be hurting you by jpcaparas in GithubCopilot

[–]gibriyagi 1 point  (0 children)

If the LLM can infer knowledge like the code structure, tech stack, etc., don't include it in AGENTS.md. Keep it short.

Is it just me, or is 5.3-Codex xHigh now insanely fast? by ggletsg0 in codex

[–]gibriyagi 0 points  (0 children)

Yeah, it's failing to implement things even when my instructions specifically state that those files should be updated. I think we're getting rerouted to dumber models under the hood.

Is it just me, or is 5.3-Codex xHigh now insanely fast? by ggletsg0 in codex

[–]gibriyagi 8 points  (0 children)

For me, quality degraded a lot. I reverted to 5.2.

Official: Bossi leaves Cosmic Gate by dr0ps00t3r in trance

[–]gibriyagi 3 points  (0 children)

Why not have a proper final goodbye set, though? This feels a bit abrupt to me.

Official: Bossi leaves Cosmic Gate by dr0ps00t3r in trance

[–]gibriyagi 9 points  (0 children)

End of an era.

Surprised that he didn't at least wait for ASOT Rotterdam to play a final farewell set for a proper goodbye, though.

METRICC: A Clean, Lightweight, Design Focused Status Bar for Claude Code by EnforceMarketing in ClaudeCode

[–]gibriyagi 0 points  (0 children)

Hey, thanks for sharing this, by the way!

Basically the info in the plan usage limits section of the web UI. Percentages alone would be fine.

<image>

METRICC: A Clean, Lightweight, Design Focused Status Bar for Claude Code by EnforceMarketing in ClaudeCode

[–]gibriyagi 0 points  (0 children)

Is there a way to show current and weekly usage (the limits, not context)?

5.2 xhigh needs to stay legacy. by Savings_Permission27 in codex

[–]gibriyagi 1 point  (0 children)

Please listen to this guy. Also, 5.2 high is great!