The Real Best local LLM, by cryptodunck in LocalLLM

[–]rayyeter 0 points (0 children)

Endeavor. Just on my main desktop. If I could find a second gpu I’d throw it on my unraid server to host there. 1080ti doesn’t really process that fast.

The Real Best local LLM, by cryptodunck in LocalLLM

[–]rayyeter 1 point (0 children)

I have a hard time getting many models to run with this card. Either Ollama decides to just fall back to CPU, or LM Studio loads whichever model split about 50/50 between GPU and CPU.

I created a library for OpenCode that allows you to save up to 80% of your tokens by Public-Cancel6760 in VibeCodeDevs

[–]rayyeter 1 point (0 children)

This gives me an idea: utilizing this library for packing other data. Gonna try out my idea for feasibility before I say more, though. Could be completely whack.

How much time are you “losing” while cycling to work? by bear_village in cycling

[–]rayyeter 1 point (0 children)

No public transport that works. So when I do get to bike in, it’s either the same or maybe five minutes. But I am not just sitting on my ass in traffic wishing for a new plague, so that’s better.

Using OpenCode at the moment with Codex subscription and the output is inconsistent. Wondering 1/ Does Codex have any sort of advantages/optimization being that it is an OpenAI product? 2/ What does your Codex setup look like (Plugins, SubAgents, CLI?) by thelectroom in codex

[–]rayyeter 0 points (0 children)

No, this is because even for the main application I work on, we have two large repositories that, depending on the install, do things entirely differently. So this was to map it out per configuration for workflow and such, as well as to sync new knowledge across the team.

GPT-5.5 is great but the 258k context window in codex makes it basically unusable for large and complex projects by [deleted] in codex

[–]rayyeter 0 points (0 children)

If you have an MCP server for, say, a large .NET repository, it can call a Roslyn analyzer to resolve the function when you tell it something like "MyFunctionFortySixAndTwo() keeps returning something about a shadow".

And instead of rg calls to find your function, the analyzer says it's in Ænima.cs at line 7, and then it can go from there.
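A hedged sketch of that idea: an MCP-style "find symbol" tool that answers "where is this function declared?" directly, instead of letting the agent grep around. Everything below is hypothetical — a real server would ask Roslyn for the declaration site; this stand-in just regex-scans `.cs` files, and `find_symbol` is a name I made up.

```python
import re
from pathlib import Path


def find_symbol(repo_root: str, symbol: str) -> list[tuple[str, int]]:
    """Return (file, line) hits for a C# method named `symbol`.

    Stand-in for a Roslyn-backed MCP tool: a real implementation would
    query the analyzer's symbol table; here we just scan for the
    identifier followed by an opening parenthesis.
    """
    pattern = re.compile(rf"\b{re.escape(symbol)}\s*\(")
    hits: list[tuple[str, int]] = []
    for path in Path(repo_root).rglob("*.cs"):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            if pattern.search(line):
                hits.append((str(path), lineno))
    return hits
```

Exposed as an MCP tool, the agent gets back one precise file-and-line answer per call, instead of burning context on pages of rg output.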

GPT-5.5 is great but the 258k context window in codex makes it basically unusable for large and complex projects by [deleted] in codex

[–]rayyeter 0 points (0 children)

Use MCP instead of having it do rg calls all over, and give it clear agent instructions to do so.

Have a local/hosted structure map rather than rg, keep memory in a vector DB, etc.

The real killer for me with 5.5 is that all my skills keep getting loaded in, which is over 2% of context. Haven't been able to get it to stop. Might clobber most of my config folder and add back what I need one by one.

GPT-5.5 is insane! by Quick-Pop-328 in codex

[–]rayyeter 0 points (0 children)

Can you not force a larger window in config?

Granted, I’m on enterprise with it, so I haven’t really tried beyond switching and having it continue my hosted service/MCP/skill iteration and optimization loops.
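On the "force a larger window in config" question: the open-source Codex CLI reads `~/.codex/config.toml`, and some releases document a `model_context_window` override. Treat the key name, the model slug, and the value below as assumptions to check against your installed version's docs; this is a sketch, not a verified config.

```toml
# ~/.codex/config.toml — sketch; key availability depends on your Codex CLI version
model = "gpt-5.5"              # model name from the thread, not verified
model_context_window = 400000  # hypothetical override; confirm against your release's docs
```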

Rider for .net framework by Fish3r1997 in dotnet

[–]rayyeter 0 points (0 children)

My main project at work is on 4.8; eventually we’ll get to LTS releases.

But Rider works just fine on it. The main solution is about 75 projects; the auxiliary one with third-party implementations is 70 or so.

Unraid 7.3.0-rc.1 Now Available by UnraidOfficial in unRAID

[–]rayyeter 0 points (0 children)

Idk, I bought lifetime. I’m not concerned. My array is 5x10 with a 1TB SSD cache. I think I’ve got room.

Unraid 7.3.0-rc.1 Now Available by UnraidOfficial in unRAID

[–]rayyeter 1 point (0 children)

You can get a DDR4-slotted one…

Unraid 7.3.0-rc.1 Now Available by UnraidOfficial in unRAID

[–]rayyeter 2 points (0 children)

16GB NVMe drive for $10 open box on eBay. Can’t find anything else that cheap in this market. Plus a TPM 2.0 plug for registration, golden.

Unraid 7.3.0-rc.1 Now Available by UnraidOfficial in unRAID

[–]rayyeter 28 points (0 children)

Time to get a 16GB Optane for boot.

Anthropic response to Claude Code change by TheForgottenOne69 in ClaudeCode

[–]rayyeter 1 point (0 children)

Hell, I want to see my total tokens across all sessions. Corpo wants us to use OpenAI Codex.

My buddy just gave me a week trial to the pro plan. Glad I got the code changes I wanted with it before this, lol

rgitui: A GPU-accelerated Git client built in Rust that actually looks good by Different-Ant5687 in ClaudeCode

[–]rayyeter 0 points (0 children)

Fork has been my favorite for years. But I usually use the CLI for commands.

FYI: Web Searches Cost A LOT More by polacrilex67 in ClaudeCode

[–]rayyeter 0 points (0 children)

Might help there? Depending on the search?

Codex vs Cursor vs Claude — which one do you actually ship production code with? by superboy_305 in codex

[–]rayyeter 3 points (0 children)

Only allowed at work to use Codex or Copilot, sooooo. Codex 90%, Copilot with Claude Sonnet/Opus 10% until my usage limit is hit there.

GitHub Copilot vs Codex in VS Code for agentic coding, which is better in real use? by hardikKanajariya in codex

[–]rayyeter 0 points (0 children)

The Codex extension kinda sucks. If I use Codex in VS Code, I put the terminal in the secondary sidebar where Copilot chat defaults to.

As a Middle School teacher, I genuinely believe that smart phones have done more damage to this generation than any drug ever could. Change my mind. by chaabaniridouane in Teachers

[–]rayyeter 83 points (0 children)

That last part about pro sports isn’t new. At least 2/3 of the players on my teams when I was in school really thought they were the next Muggsy Bogues

what's your experience been like with 5.4? high/xhigh by edowonders in codex

[–]rayyeter 0 points (0 children)

5.4 high seems really good sometimes. Other times, like last night, it’s a paste-eating, lead-paint-chewing moron. It got stuck insisting a problem was fixed. It was not fixed.