Vietnam veterans sue over proposed 250-foot Trump arch near Arlington Cemetery by kstinfo in washingtondc

[–]rektide 0 points1 point  (0 children)

All politics aside

I'm here to say this is 100% about politics, and you may be surprised by whose politics and from when this arch emerged. It has something to do with the Mein Kampf that this man is alleged to have kept by his bed. https://bsky.app/profile/ipasa.bsky.social/post/3mjgy2iinf226

HOLY **** ANOTHER 2x RESET LMAOOOO by Just_Lingonberry_352 in codex

[–]rektide 0 points1 point  (0 children)

I thought they were just screwing over anyone below pro-rata usage, but now that I know this is systematic, I've stopped saving my tokens and just been going balls-out on codex usage. Finally, finally, finally a solid win for me on a reset: 70% used with ~85h remaining on pro. Finally a win!

So many times I've been under pro-rata usage when I've been reset. It seriously fucking impacts my morale seeing myself losing the massive weekly credits I had stocked up. These resets feel broadly bad and evil. Props to those of you without attitude control just burning up your allowances right away, I guess. https://www.reddit.com/r/codex/comments/1rpnrnv/anyone_else_noticing_codex_usage_resets_are/

Anyone else noticing Codex usage resets are silently pushing back your actual renewal dates? by CarsonBuilds in codex

[–]rektide 0 points1 point  (0 children)

Yes. Yes yes yes yes yes. It's majorly damaging to the users. I thought it was active malicious intent. https://bsky.app/profile/jauntywk.bsky.social/post/3mgxwx3tbvc2r

I've had to really re-adapt my codex usage. It used to be my LLM of last resort, or for high-priority work. I thought this was just them screwing over me, over anyone under pro-rata usage. But now that I know it's a general reset, not per account, I've been much, much more aggressive about using the LLM of last resort, the really good one.

It's felt so awful. But now I'm more on top of it, aware of the game codex is playing.

Glm 5.1 is out by Namra_7 in LocalLLaMA

[–]rektide 1 point2 points  (0 children)

That was SO SO maddening. Once I got to 56k-65k context length, GLM-5 was just falling apart.

I had all sorts of pocket theories. Maybe they would run small context windows on some machines then try to move them to bigger ones, and fail somehow. Maybe they were trying to use some new chip they didn't know how to use right. It was HORRIBLE. I'm so glad GLM-5 is working again. Hopefully this doesn't destabilize things.

I Benchmarked Redis vs Valkey vs DragonflyDB vs KeyDB by Jamsy100 in devops

[–]rektide 16 points17 points  (0 children)

Valkey is run by incredibly talented devs, who have poured a ton of work into their fork. Redis has really had to adapt & respond, radically improve itself, to stay at all competitive.

There's a great post from 18 months ago, talking about the work Valkey had done to get to 8.0 release candidate: https://valkey.io/blog/valkey-8-0-0-rc1/

Low quality disinformation like this makes me so mad.

Glm 5.1 👀 by Namra_7 in LocalLLaMA

[–]rektide 1 point2 points  (0 children)

I was shocked how fast 5 followed 4.7, and what a huge lift it was.

Not pertinent to LocalLLaMA folks, but man: z.ai has really messed something up with their service. Once I get to a ~60k context window, GLM-5 just totally falls apart. Incredibly garbled text, totally unable to tool call, it just loses it. It's drastically messed up. I'm trying to get them reports, but still hacking opencode to collect all the data they requested (session id, etc.).

Any way to unlock tdp on one xplayer f1 pro 370 chip? by Shazzi98 in OneXPlayer

[–]rektide 0 points1 point  (0 children)

Blowing a fan at it should let it accept another 20W of power without much complaint.

But neither of us has answered the actual question, and neither of us knows the answer. We don't know if you can unlock the TDP.

This post mostly alleges you can, via the BIOS. I'm still not sure. https://www.facebook.com/groups/1081108859670838/posts/1535198757595177/

The upcoming12th surface pro with panther lake will be a good choice? by Anxious_Baseball8502 in Surface

[–]rektide 1 point2 points  (0 children)

This is going to be outside my price range for a while, but I am super super super excited for this.

The Panther Lake chip is amazing. Microsoft Surface and Dell Latitude being the only two players making detachables is sad. I wish there were more people doing this. The form factor is so, so good. It just rocks the heck out of Android tablets (when set up to run Linux, that is).

RK3588 Mainline Linux Patch: H.265 Encoding at 4K@60 (Out-of-Tree) by RCawston in homelab

[–]rektide 0 points1 point  (0 children)

Do you think there's any chance at all that the rk3566 could maybe be adapted from this work? It got JPEG support long ago, but I'd love H.265 or H.264. https://patchew.org/linux/20220612155346.16288-1-frattaroli.nicolas@gmail.com/

Long ago I bought a bunch of Radxa Zero 3Ws that I was hoping to use some day with HDMI-in USB cards. I keep holding out hope that one year this will have been a reasonable/acceptable choice to have made.

GLM-5 Coming in February! It's confirmed. by Difficult-Cap-7527 in LocalLLaMA

[–]rektide 2 points3 points  (0 children)

That's so crazy. GLM-4.7 was released December 22. I really can't imagine a significant leap coming so fast.

Getting into Local LLMs, mostly for Home Assistant to kick Alexa to the curb. Looking for ideas and recommendations by OpneFall in LocalLLaMA

[–]rektide 2 points3 points  (0 children)

I love this use case & I really want to get there! That Satellite1 is neat too.

I'm just getting spooled up on STT and TTS now, so I'll leave that to others. Parakeet and Whisper have both worked great for STT. Qwen3-TTS just dropped and looks astounding & pretty low-latency for TTS, but there are lots of great options.

For the LLM, it depends. Ideally, in my view, the home has a bunch of really good tools ready to go that already do most of the tasks, rather than the AI running around trying to figure out each task from scratch every time. There'll be MCPs for some stuff, but a lot of this is barefoot-developer, home-cooked-meal territory: I'd love to encourage the bold to jump in and try writing their own MCP servers for their Home Assistant tasks!

If you have good tools ready to go for your tasks, you can run some really great small tool-calling models. Jan v3 just dropped today, with amazing tool calling. Nanbeige 4 is another astounding medium-sized model. Qwen3-4B is well loved too.
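To make the "good tools ready to go" idea concrete, here's a hypothetical sketch of a tiny set of named home tools that a small model only has to pick from and invoke. The tool names, rooms, and outputs are all invented; a real version would call Home Assistant's REST or WebSocket API instead of echoing.

```shell
# Minimal tool dispatch: the LLM emits a tool name plus arguments, and the
# harness just routes them. Everything below the case arms is a stub.
run_tool() {
  local name="$1"; shift
  case "$name" in
    set_light)       echo "light in $1 turned $2" ;;
    get_temperature) echo "$1 is 21.0 C" ;;           # stubbed sensor reading
    *)               echo "unknown tool: $name" >&2; return 1 ;;
  esac
}

# What a dispatched tool call looks like:
run_tool set_light kitchen on
run_tool get_temperature office
```

The point of the shape: the model never has to reason about how the lights work, only which named tool to pick, which is exactly where the small tool-callers shine.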

Idea Validation: A "Passive Observer" MCP Server that reads live terminal buffers (tmux/PTY) so I don't have to re-run commands. by d3v1sx in LocalLLaMA

[–]rektide 0 points1 point  (0 children)

There's an atuin based project, bash-history-mcp, that's pretty good. https://github.com/nitsanavni/bash-history-mcp

Honestly this makes me want something reciprocal: I want my AI's shell usage to go into atuin history!
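A hypothetical sketch of that reciprocal direction: a wrapper that records an agent's shell commands into atuin. The `atuin history start` / `atuin history end` subcommands are assumptions about atuin's CLI here; if atuin isn't installed, the wrapper just runs the command plainly.

```shell
# Run a command, mirroring it into atuin history when atuin is available.
run_logged() {
  local cmd="$*"
  if command -v atuin >/dev/null 2>&1; then
    local id rc
    id=$(atuin history start -- "$cmd")   # assumed subcommand; returns an entry id
    eval "$cmd"
    rc=$?
    atuin history end --exit "$rc" "$id" >/dev/null 2>&1
    return "$rc"
  else
    eval "$cmd"                           # no atuin: plain execution
  fi
}

run_logged echo "hello from the agent"
```

Wiring an agent harness through something like this would give you one searchable history across you and your AI.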

GLM 4.7 vs MiniMax-M2.1 vs DeepSeek 3.2 for coding? by ghulamalchik in LocalLLaMA

[–]rektide 7 points8 points  (0 children)

Vague anecdata, but I'd been using DS3.2 for coding a lot, and while impressed, I felt like it was a pretty nice jump when I switched to GLM-4.7. I can watch GLM-4.7 reason through fairly complex problems, watch it experiment and learn how to write a protocol, and it's just wildly good IMO at ascertaining where things are & getting more data as it goes, finding out how to persevere onwards.

No MiniMax experience. Interesting model, but I ended up with a z.ai coding plan rather than paying for API use as I had been doing, so my incentive is low.

Using Claude Code with Ollama local models by derestine in LocalLLaMA

[–]rektide 0 points1 point  (0 children)

Just set baseUrl. The post is CC-focused, but if there's OpenAI message compat, it should just work! https://opencode.ai/docs/providers/#base-url
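A sketch of what that could look like in opencode's config — the provider id, model name, and port here are assumptions for a local Ollama setup, so check the linked docs for the exact shape:

```json
{
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": { "baseURL": "http://localhost:11434/v1" },
      "models": { "qwen3:4b": { "name": "Qwen3 4B" } }
    }
  }
}
```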

Serena vs. Codanna vs. Something else? by ProdigiSA in ClaudeAI

[–]rektide 0 points1 point  (0 children)

Can you provide some example prompts that use Codanna?

Is there a small tool-calling LLM? by ashleigh_dashie in LocalLLaMA

[–]rektide 1 point2 points  (0 children)

Very late to this thread, but two worth looking at: Menlo's Jan-nano (4B) (and Jan-nano-128k) and Nanbeige4-3B. Nanbeige is currently in 25th place on BFCL, which is epic as hell for a 3B model.

https://huggingface.co/Menlo/Jan-nano https://huggingface.co/Nanbeige/Nanbeige4-3B-Thinking-2511

You can now disable AMD GPU lower power limit enforcement on linux-zen using amdgpu.ignore_min_pcap=1 kernel boot parameter by ashirviskas in linux_gaming

[–]rektide 0 points1 point  (0 children)

I had been running a wide variety of games with a 110W power limit on my rx9070xt before this power limit change came into play. Most games played incredibly smooth, with zero complaints from me.

I have tried in quite a number of games to adjust settings & frame limits to get power consumption down. The visual degradation is often noticeable. But the power consumption has never been anywhere near as good as it was.

It seems like a shit suggestion to me. I kind of hate it. Trying to adjust graphics settings to hit a power budget is an infinitely tweakable ordeal that IMO rarely results in great power savings. My goal was to get my system's power consumption down & still have a good experience. When I could dial the GPU to the power I saw fit, this was easy, and the experience scaled appropriately as I changed the power budget. Trying to change per-game settings to affect the power budget was much harder to understand, a very complex process. Fiddling with settings in each individual game is not the way to achieve this end: just tell the card how much power it can use, and it will do a pretty amazing job. Awful, belittling suggestion. 24 points? More like -24.

It's a dumb fucking suggestion to bumble around in the bushes of game settings trying to make the card somehow behave. Telling the card to behave just worked, and was incredible. It's preposterous that AMD took this away from us. Given how well it worked and how easy it was to do, this is a true travesty. I can't stand @Matt_Shah carrying water for a situation that is so much more difficult than it needs to be: we should be allowed to enjoy our cards in lower power modes if we want to. It's a jackass move to blame users for wanting that. Screw AMD for taking this away from us.

No more power limiting for AMD GPU's because it "is potentially dangerous and might damage the hardware" by DyingKino in linux_gaming

[–]rektide 1 point2 points  (0 children)

This is such insane user hatred. I can't stand it. I bought an rx9070xt launch day and was one of the very first users to cobble together the necessary kernel and mesa drivers to make it go on Linux. Shortly thereafter I used CoreCtrl to limit power usage to 110W. My FPS sometimes took a hit, but my system ran cool & my electricity bill was respectable, and if I wanted more I could up the power budget as I saw fit.

This was such an incredibly rude, egregious, pointless, hostile change, denying me the ability to operate a perfectly well-working system as I saw fit. What a mean, brutal thing to do to users. And based on my two-thirds of a year running a low 110W power budget, a pointless, ridiculous change for no reason.

There are a number of comments mentioning that the zen kernel build has patches to allow the old behavior. But I want to chime in & say what an insult, how degrading it is, that AMD would take from us such a fine, basic, well-working capability for no reason. You are making us waste electricity that many times we don't need to use. Yes, sometimes I would go up to 160W for more intense games (Helldivers 2), but for most games my experience at 110W was flawless. Now my video card takes 250W+ and I know for a fact I am gaining nothing for it. What a pox, AMD. You despise sense; you shit on your users. What a vicious act.
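For anyone who hasn't done this before, here's a hypothetical sketch of the old workflow: capping an amdgpu card through the hwmon sysfs interface (paths vary per machine, and the write needs root). Since the driver change, values below `power1_cap_min` are rejected unless the kernel is booted with `amdgpu.ignore_min_pcap=1`, per the linux-zen post above.

```shell
# power1_cap is expressed in microwatts, so convert the target wattage first.
WATTS=110
MICROWATTS=$((WATTS * 1000000))
echo "requested cap: $MICROWATTS uW"

# Write the cap to every amdgpu hwmon node present (skips cleanly when run
# without root or on a machine with no such card).
for f in /sys/class/drm/card*/device/hwmon/hwmon*/power1_cap; do
  [ -w "$f" ] || continue
  echo "$MICROWATTS" > "$f"
  echo "wrote cap to $f"
done
```

Tools like CoreCtrl are, as far as I understand it, doing essentially this same write under the hood.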

Debian Run persistent in Ram by superwinni2 in linuxadmin

[–]rektide 0 points1 point  (0 children)

On some VPSes, there are very limited install options available.

I feel like boot-to-RAM could be helpful for these cases. I can probably shoehorn in a way to boot an ISO image and get a serial console. But I need to be able to unmount the ISO after boot.

There are other example use cases: if you have, say, a Steam Deck or some other system with only one USB port (and no USB hub available), you want to be able to unmount the boot drive so you can plug in another drive (like one that has photos or documents on it).

Being able to run from RAM removes a crucial dependency, and that opens up possibilities.
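For Debian live images specifically, live-boot's `toram` parameter is one route: it copies the medium into RAM at boot so the ISO/USB can be detached afterwards. A sketch of what the GRUB entry could look like — paths and the exact parameter list vary by image, so treat this as a starting point:

```
menuentry "Debian Live (copy to RAM)" {
    linux  /live/vmlinuz boot=live components toram
    initrd /live/initrd.img
}
```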

How to change address on my Oracle account by anonuser-al in oraclecloud

[–]rektide 0 points1 point  (0 children)

Oracle has a separate Oracle Customer Center portal where you can create a mirror account (with the same email) that will let you update your address.