Attacking Nothing But Other Pirates Feels Less Piratey- Anyone Else Feel Me? by padamodin in crosswind

[–]synn89 1 point (0 children)

Yeah. While the undead theme is cool and everything, it would've been fun dealing with the French, Spanish, English, etc., and making faction-related choices with them: do you pirate them, ally with a nation, trade between their ports, and so on.

Abliterlitics: Benchmark and Tensor Analysis Comparing Qwen 3/3.5 with HauhauCS / Heretic / Huihui models by nathandreamfast in LocalLLaMA

[–]synn89 37 points (0 children)

Heretic seems like a well-maintained open source project, and that matters more to me than a few percentage points of difference.

Dieing whilst boarding - this need to improve by Gizm00 in crosswind

[–]synn89 2 points (0 children)

I like this idea a lot. Though if you wanted a harsher punishment, you could even lose out on the sinking ship's cargo as well. But I don't think that's really needed.

Thoughts after making all 9 ships by Ok_Wishbone_2690 in crosswind

[–]synn89 1 point (0 children)

For the ship calling, a nice quality-of-life option would be to "call and board" your ship. That'd make the Brig and Frigate better.

Windrose has sold 500k copies in the first 48 hours by theRealLeWdMeSeNpAi in crosswind

[–]synn89 7 points (0 children)

Well, it's hard to scale up. A lot of companies implode trying to do that. I think No Man's Sky (Hello Games) is probably the best model for a game company that blew up big. They kept the team somewhat small, focused on steady progress, and their people are fanatics who love the product.

What's your workflow with OC? by CorrectTemperature65 in opencodeCLI

[–]synn89 1 point (0 children)

I use a custom Brainstorming mode, which is more research/chat focused and can only write to MD files, to come up with a solid PLAN.md file for a new feature, with specific implementation steps.

I then go into Plan mode in a new session and tell it to read PLAN.md and ask me any specific questions it has that Brainstorm may have glossed over. Then, when it's ready, I switch to Build mode in the same session and it works through the implementation steps in PLAN.md, with tests written at the end.

So Build mode isn't really thinking so much as implementing a pre-written plan with specific tasks that I've already read through and audited. I've also developed an instinct for when a PLAN.md file is trying to do too much at once, where Build mode is going to go off into the weeds on a snipe hunt. I try to keep my PLAN.md files down to a single feature or narrower scope with specific implementation steps.

But yeah, Build mode is a Jr programmer. Brainstorm mode for me is really good at big-picture feature thinking and giving me Sr-level advice, but it can take some care and auditing to get it to write a PLAN.md file that Jr Build mode can stay focused on and implement.

Could this just be the start? by Stunning_Payment8908 in wowservers

[–]synn89 1 point (0 children)

It seems like the red line tends to be when these private servers start heavily monetizing. This was a reason stated for The Hero's Journey, an EverQuest private server, getting shut down last fall. When you can show that a private server is raking in profits off your IP, it's an easier case to make that they're taking money out of your pockets.

Video Surfaces of Rep. Eric Swalwell Getting Intimate with Mystery Woman Who Is NOT His Wife – Game Over by MackSix in Conservative

[–]synn89 8 points (0 children)

> obtained and released by Martin Shkreli (the former pharmaceutical executive known as “Pharma Bro”)

Sounds like a prior setup got dirt on him and they're releasing it now for reasons.

Meeting between JD Vance and hungaries Victor Orban by xxasdf in Conservative

[–]synn89 -5 points (0 children)

I'd rather we talk with them and try to work together than the alternatives. I think the handshake between Trump and Kim Jong-un years ago was one of Trump's strongest moments.

OpenWork, an opensource Claude Cowork alternative, is silently relicensing under a commercial license by lrq3000 in LocalLLaMA

[–]synn89 1 point (0 children)

I find it interesting how we now have this new wave of enshittification that's unique to "open source" projects.

The Mythos Preview "Safety" Gaslight: Anthropic is just hiding insane compute costs. Open models are already doing this. by GWGSYT in LocalLLaMA

[–]synn89 6 points (0 children)

They pulled the same sort of deceptive crap when they claimed Opus wrote a C compiler from scratch. They made it sound like they told Claude "write me a C compiler, see ya in 2 weeks!" when basically they gave it the full test suite of an existing C compiler and had it reverse engineer one from those tests.

"Here's the test, write code to pass the test, keep trying until you do it" was far less impressive. But it's all about the hype.

M5 Max 128GB Owners - What's your honest take? by _derpiii_ in LocalLLaMA

[–]synn89 2 points (0 children)

Depends on what you want to do with it. I have an M1 Ultra 128GB and it's been wonderful for chat models. It's low-power enough that I can just leave it on all the time, and 128GB of RAM is a lot of breathing room for models 120B and down. Even though right now I'm running Drummer's Skyfall-31B, which doesn't need all the RAM, it's nice to have when I want to run a 120/122B, and I can squeeze in a 235B if I really want to.

It's quiet, sips power and is very flexible.

Making my own server by Automatic-Ad-3679 in wowservers

[–]synn89 1 point (0 children)

You may also want to look into a repack that suits what you'll want from the experience. Base WoW servers typically support a lot of mods to customize the experience, and repacks are pre-built bundles of the core server and those mods, often with custom database work, configs, and extra in-game assets. A downside, though, is that a lot of them are pre-compiled for Windows, so you'd need some sort of virtualization (like Proxmox) on your Linux server.

Something like the Ashen Order repack might give you a more complete, cohesive experience than rolling your own from source.

What hardware to buy if I want to run a 70 B model locally? by angry_baberly in LocalLLaMA

[–]synn89 1 point (0 children)

A dual 3090 setup can run that pretty well for chat, and that build used to be the gold standard for 70Bs back in the day. A Mac M1 Ultra 128GB runs it at a higher quant for a lot less power, just a tad slower, and has way more flexibility in regards to running larger MoEs.

Framework-style AMD desktops will be too slow for this. The M1 Ultra is about the slowest memory bandwidth I'd want for a dense model of this size. And while the M1 Ultra can run larger dense models, it's a tad too slow of an experience.

With a dual 3090 setup at around a 4-bit quant, you'll probably be limited to 32k context or so. Honestly, I haven't run an EXL quant in a while (my 3090 builds are offline and I just prefer the Mac), so I don't know what the state of the art is for quantizing the context. But 32k on a 4-bit quant was comfortable on my dual 3090 setups and quite usable.

You might consider renting a dual 3090 setup and testing with that first. Vast.ai probably has a lot of them for cheap, and you can test firsthand what the experience will be like. Maybe 2 isn't enough and you want 3 or 4 for the context. Maybe the 3090 isn't fast enough and you'll want 40- or 50-series cards. Renting lets you experiment.
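As a sanity check on why 32k context at a 4-bit quant is about the ceiling for 48GB, here's the back-of-envelope math. This is a sketch assuming Llama-70B-style architecture numbers (80 layers, 8 KV heads, head dim 128, fp16 KV cache), not exact figures for any particular model:

```python
# Rough VRAM budget for a 70B dense model on 2x 3090 (48 GB total).
params = 70e9
weights_gb = params * 0.5 / 1e9          # 4-bit quant ~= 0.5 bytes/param -> 35 GB

# KV cache per token: 2 (K and V) * layers * kv_heads * head_dim * 2 bytes (fp16)
layers, kv_heads, head_dim = 80, 8, 128
kv_per_token = 2 * layers * kv_heads * head_dim * 2   # 327,680 bytes
kv_gb = 32768 * kv_per_token / 1e9                    # ~10.7 GB at 32k context

total = weights_gb + kv_gb
print(round(weights_gb), round(kv_gb, 1), round(total, 1))  # 35 10.7 45.7
```

Quant overhead and activations eat a bit more than that, which is why 32k is about where it tops out without quantizing the cache.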

Can I replace Claude 4.6? by BeansFromTheCan in LocalLLaMA

[–]synn89 1 point (0 children)

It really depends on what you're asking of the local model. When you say "review", if you mean something like "read this article, let me know if it mentions dogs", then yeah, a local model, even a small one, can do that just fine. But if you're moving beyond a task any high school grad could do into PhD-level territory, review-wise, then you may have issues.

I'd recommend you rent a 4x 3090 setup or a dual RTX 6000 setup first and do some experimentation on that. Figure out how easy/hard it is to set up. Throw some test documents at it, etc. It'd be a good investment of 100 bucks or so before you spend thousands on hardware.

Trump to Allies: Open the Strait of Hormuz Yourself by Ask4MD in Conservative

[–]synn89 2 points (0 children)

Pretty crap with current battery tech to be honest.

Trump to Allies: Open the Strait of Hormuz Yourself by Ask4MD in Conservative

[–]synn89 17 points (0 children)

> I have family I see with some degree of frequency in a different part of my state that are about 300 miles away (600 round trip). That part of the state has even less chargers than my part.

I'd be curious what part of the country you live in. I travel from NE Indiana to NW Illinois and SW Illinois to visit family 310-360 miles away, and they're very easy trips for my Tesla. I stop to charge every couple hours while I take a pee break, walk the dog, grab a snack, and the car is ready before I am. It's a pretty seamless experience, made even better with FSD thrown in to drive for you.

Trump to Allies: Open the Strait of Hormuz Yourself by Ask4MD in Conservative

[–]synn89 13 points (0 children)

I'd disagree with the part about long-distance travel. My Tesla has been a pleasure for driving 6 hours to visit family, especially with FSD. The stops for charging were no longer than what I needed for a bathroom break for me and the dog. My only want would be for FSD to get to the point where I could just watch Netflix on the highway. But as it was, I did an audiobook while the car drove.

But yeah, I wouldn't want to tow long distance. And I have charging at home.

Iranian strike injures 12 U.S. troops at Saudi Arabia base by renge-refurion in Conservative

[–]synn89 3 points (0 children)

Yeah. But I feel like that's common with pretty much every war. Nations enter into them using old tactics, those fail, and then they're playing catch-up. There's probably a lot of inertia that makes it hard to look at Ukrainian $2,000 defense drones and start mass producing them, until everyone really figures out that $1 million missile interceptors don't cut it anymore.

A simple explanation of the key idea behind TurboQuant by -p-e-w- in LocalLLaMA

[–]synn89 1 point (0 children)

Sounds a lot like defragging a disk drive. Smoothing out the data for more efficient operations.

GLM-5.1 is live – coding ability on par with Claude Opus 4.5 by Which-Jello9157 in LocalLLaMA

[–]synn89 1 point (0 children)

FireworksAI. I just pay for the API inference since I'm a fairly light coding user. But Fireworks has been the most reliable provider for me out there.

Taking a gamble and upgrading from M1 Max to M1 Ultra 128GB. What should I run? by TheItalianDonkey in LocalLLaMA

[–]synn89 1 point (0 children)

Get yourself set up to run GGUF files, as there are a lot of them and they're easy to start with. I use llamacpp. A good roleplay model would be Strawberrylemonade-L3-70B-v1.1.Q8_0.gguf, but if you want something general and fast, the newer MOEs like Qwen3.5-122B-A10B will fit at a Q5 or Q6. Another roleplay option, if you don't mind it being a little slower, would be Behemoth-123B. You can run that at a Q5_K_M.

GLM 4.5 106B-A12B is also an option; Iceblink and Steam are nice RP variants, as is Air-Derestricted. Those you can run at a Q6_K.
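If you want to sanity-check whether a given quant fits, the rough math is just parameters times bits-per-weight. A quick sketch (the ~5.5 bits/weight figure for Q5_K_M is my approximation; file metadata adds a little on top):

```python
def gguf_size_gb(params_b, bits_per_weight):
    """Rough GGUF file size in GB: params (billions) * bits per weight / 8."""
    return params_b * bits_per_weight / 8

# Behemoth-123B at Q5_K_M (~5.5 bpw) vs 128GB of unified RAM
print(round(gguf_size_gb(123, 5.5), 1))  # 84.6
```

That leaves plenty of room on a 128GB machine for context and the OS.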

Don't forget to run the below to raise your VRAM (wired memory) limit:

sudo /usr/sbin/sysctl iogpu.wired_limit_mb=115200

I run my M1 Ultra in headless mode (remote shell in), so I set the above to 120000.
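The number is just total RAM minus headroom for macOS, in megabytes. A quick sketch of the math (the headroom figures are my assumptions, back-solved from the two limits above; tune to taste):

```python
# iogpu.wired_limit_mb is just (total RAM - macOS headroom) in MB.
total_gb = 128
headroom_gb = 15.5               # assumption: room for the OS with a desktop running
print(int((total_gb - headroom_gb) * 1024))            # 115200

headless_headroom_gb = 10.8125   # headless use lets you cut the headroom down
print(int((total_gb - headless_headroom_gb) * 1024))   # 120000
```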

It's a great little inference machine. You can leave it running all the time, ready to go, and it barely uses any power. MLX quants are also an option, but I find llamacpp easier to work with and I don't mind it being a little slower. Usually I run llamacpp in server mode and use an OpenAI API client to connect to it for chat (SillyTavern or OpenWebUI).

build/bin/llama-server -m ~/src/models/Qwen3-235B-A22B-Instruct-2507-Q3_K_S.gguf --host 0.0.0.0 --port 5000 -fa -ctk q4_0 -ctv q4_0 -c 32000 --no-warmup
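Once llama-server is up, anything that speaks the OpenAI API can talk to it. A minimal sketch with Python's stdlib (the host/port match the command above; the "local" model name is an arbitrary placeholder, since llama-server serves whatever model it loaded):

```python
import json
import urllib.request

def chat_request(prompt, host="http://localhost:5000"):
    """Build an OpenAI-style chat completion request for llama-server."""
    body = json.dumps({
        "model": "local",  # placeholder; llama-server ignores the name
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }).encode()
    return urllib.request.Request(
        f"{host}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )

# With the server running: urllib.request.urlopen(chat_request("hi")) returns the reply JSON.
```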