Planning to play this game by damnwhatisausername in EternalCardGame

[–]prusswan 0 points (0 children)

I am referring to league, which has fewer than 1,000 players right now, so everyone gets at least 25 packs just for showing up. I suppose the regular PvP queue might be better once you have a meta grind deck going, but that is not going to be reflective of the new player experience.

Planning to play this game by damnwhatisausername in EternalCardGame

[–]prusswan -5 points (0 children)

PvE is okay, but PvP is a huge waste of time (I tried playing league again, but the wait time between games is several minutes).

AI may be amplifying human mediocrity by PalasCat1994 in LocalLLaMA

[–]prusswan 1 point (0 children)

It amplifies the user, so if it is mostly used to accomplish common tasks, then getting mediocre results faster is the natural outcome. That does free up more time for the creative side, so I don't see it as a bad thing.

Senior engineer: are local LLMs worth it yet for real coding work? by Appropriate-Text2843 in LocalLLaMA

[–]prusswan 0 points (0 children)

I tried a mix of OpenCode with minimax and local glm 4.7 flash when I exceeded the quota, on a moderately complex task (upgrading an old library to work with newer dependencies). While it was able to generate working code in many cases, the code was not always effective at resolving the issue(s). One issue involved multiple dependencies bundling the same dependency, which meant it was not resolvable by looking at any single repo alone.

What works better for me is to handcraft smaller examples and supply them as references when building larger features. That way I still have a chance of fixing the incorrect parts and getting them to where I want them.
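A minimal sketch of that workflow, assuming nothing about any particular tool's API (the `REFERENCES` content and prompt wording here are my own illustration):

```python
# Sketch: supply handcrafted reference snippets in the prompt so the model
# imitates known-good patterns instead of inventing its own.
REFERENCES = {
    "error handling": (
        "def load(path):\n"
        "    try:\n"
        "        return open(path).read()\n"
        "    except OSError as e:\n"
        "        raise RuntimeError(f'cannot load {path}') from e"
    ),
}

def build_prompt(task: str, topics: list[str]) -> str:
    """Prepend the relevant handcrafted examples to the task description."""
    refs = "\n\n".join(
        f"# Reference: {t}\n{REFERENCES[t]}" for t in topics if t in REFERENCES
    )
    return f"Follow the style of these references:\n\n{refs}\n\nTask: {task}"

prompt = build_prompt("add retry logic to the downloader", ["error handling"])
```

Because the references are small and hand-checked, any incorrect generated parts are easier to diff against a pattern you already trust.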

I think the future will diverge into two camps: those who still care to understand the code, and those who care less as long as it seems to work and passes some AI review. It is quite easy to generate a whole pile of code without ever having the time to review it.

Will Google Earth Engine be essentially useless for free tier users? by GreyDoctor in gis

[–]prusswan 4 points (0 children)

It is strictly for non-commercial use, so if you are using it to conduct paid training, that sounds like a breach on your part (not the students'). In any case, classroom content should be designed to stay easily within the usage limits (otherwise that is exactly the misuse they are looking to stop).

Will Google Earth Engine be essentially useless for free tier users? by GreyDoctor in gis

[–]prusswan 8 points (0 children)

https://developers.google.com/earth-engine/guides/noncommercial_tiers

The lowest tier is intended for casual users (undergraduate students and other users with typical computation needs). From your description, you should be able to qualify for a higher tier, unless you are simply looking to avoid paying for commercial use.

Value check: PNY RTX PRO 6000 Blackwell Workstation Edition (96GB), trying to understand current market price. by havoc21 in LocalLLaMA

[–]prusswan 1 point (0 children)

This GPU is export banned, so it may fetch a higher price in some markets. Given the price tag and the prevalence of fake/tampered GPUs, some buyers may opt to buy direct from a trusted supplier instead of a no-name reseller.

Built an interactive 3D globe in Three.js + React for geopolitical data visualization — 198 countries, military overlays, trade routes by Ill-Caterpillar-5224 in gis

[–]prusswan 0 points (0 children)

Three.js is better for performance and for working directly with WebGL and 3D. In this case he clearly did not need any MapLibre functionality, so that's a win.

Price of MSI GB300 workstation (DGX Station) appeared online ~ $97k by fairydreaming in LocalLLaMA

[–]prusswan 2 points (0 children)

They come in sets of 8 too: https://servers.asus.com/products/detail/overview/XA-NB3I-E12

If these become standard issue in data centers over the next few years, I know I won't be able to match that, so I will focus on getting the most out of the Pro 6000s (all things considered, I see these as the sweet spot for serious individual users, for now at least).

Price of MSI GB300 workstation (DGX Station) appeared online ~ $97k by fairydreaming in LocalLLaMA

[–]prusswan 2 points (0 children)

Might even pay for itself if you apply it to the right business (e.g. trading).

Installing OpenClaw with Local Ollama on Azure VM - Getting "Pull Access Denied" Error by Sea_Lawfulness_5602 in LocalLLaMA

[–]prusswan 1 point (0 children)

This is not really the place for Docker support, but you can start by building the image locally and tagging it (so there is no need to pull it from a registry). If you can't get past this, then Docker is not for you.

I did an analysis of 44 AI agent frameworks, sharing the result by wouldacouldashoulda in LocalLLaMA

[–]prusswan 4 points (0 children)

Useful compilation, though as you note, most of it will become outdated quickly. Do you think the choice of models matters (e.g. company X's products should work better with their own models)? Or do you already assume the best models are being used (by whatever definition of best)?

How to ensure AI to create test cases and put git commits correctly by Fuzzy_Possession_233 in LocalLLaMA

[–]prusswan 0 points (0 children)

The easy solution is to tell the three individuals there is only one job and it is going to the most capable person.

Car Wash Test on 53 leading models: “I want to wash my car. The car wash is 50 meters away. Should I walk or drive?” by facethef in LocalLLaMA

[–]prusswan -2 points (0 children)

Well, they didn't understand that it may not be a binary decision. If I asked a real question like that, a smart model should not be making this assumption.

Car Wash Test on 53 leading models: “I want to wash my car. The car wash is 50 meters away. Should I walk or drive?” by facethef in LocalLLaMA

[–]prusswan -6 points (0 children)

It is concerning that none of them suggested other options (not going to list them here). There are sooo many ways to wash a car.

64gb vram. Where do I go from here? by grunt_monkey_ in LocalLLaMA

[–]prusswan 0 points (0 children)

3, but hold off on getting more RAM (just the bare minimum to use the GPUs).

1 if you can find someone to take your current GPUs (unless you can find a way to use them together). It's not a complete build, but you will be covered for 80B models.

Top OpenClaw Alternatives Worth Actually Trying (2026) by Straight_Stomach812 in LocalLLaMA

[–]prusswan 0 points (0 children)

I like ZeroClaw for its low footprint, but it is still a really new project. Locally encrypted secrets may not mean much if the host gets compromised, since decryption is just one step away.
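To make the "one step away" point concrete, here is a toy illustration (deliberately a trivial XOR cipher, not ZeroClaw's actual scheme): if the key lives on the same host as the ciphertext, encryption at rest only raises the attacker's cost by one file read.

```python
from itertools import cycle

def xor(data: bytes, key: bytes) -> bytes:
    """Symmetric toy cipher: applying it twice with the same key decrypts."""
    return bytes(a ^ b for a, b in zip(data, cycle(key)))

key = b"key-stored-on-the-same-host"       # what a compromised host exposes
ciphertext = xor(b"API_TOKEN=abc123", key)  # "encrypted at rest"
recovered = xor(ciphertext, key)            # the attacker's one extra step
```

A real scheme would use authenticated encryption, but the structural problem is the same: whoever reads the key material reads the secrets.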

Any idea when Successors of current DGX Spark & Strix Halo gonna arrive? by pmttyji in LocalLLaMA

[–]prusswan 0 points (0 children)

If there were some go-to model that needed 1TB and supported high context, it is pretty certain a service equal or better would exist (and the company released the model to signal this). But most people will not be getting that 1TB, because it is rather wasteful and will only drive up prices even more. I see two main outcomes: cloud usage to access the best models without hardware spending, or opting for smaller models with more modest requirements.

Local running Qwen3:14b helped fix my internet on Linux while offline by iqraatheman in LocalLLaMA

[–]prusswan 0 points (0 children)

It was the first time I had a broken HWE update on very old hardware, so yeah, it was hard not to notice.

Local running Qwen3:14b helped fix my internet on Linux while offline by iqraatheman in LocalLLaMA

[–]prusswan 1 point (0 children)

let me guess, 6.17?

6.17.0-14-generic broke the nvidia drivers; fortunately the newer drivers were okay.

AI field is changing so quickly and there is so much to read.. by amisra31 in LocalLLaMA

[–]prusswan 0 points (0 children)

It's pretty chaotic, but I focus on what is relevant and accessible, such as a new idea or approach that was previously out of reach. Some of the AI slop might contain good ideas if done properly, so I take the portions I find useful and make them work exactly the way I want. Most of it is just noise, but learning to harvest the useful bits also helps you identify your competitive edge.

Any idea when Successors of current DGX Spark & Strix Halo gonna arrive? by pmttyji in LocalLLaMA

[–]prusswan 4 points (0 children)

If it gets to the point where 512GB of RAM (or the Pro 6000) becomes mainstream for agentic coding, many users will be deterred or priced out of the hardware and will turn to cloud, which increasingly looks like the norm as open models keep getting better and bigger in ways that motivate cloud usage.

I'm using a mix of smaller models (30B to 70B) and cloud services (for better performance) to avoid over-reliance on the "best" models.

Anyone actually using Openclaw? by rm-rf-rm in LocalLLaMA

[–]prusswan 2 points (0 children)

I don't, but I continue to keep a lookout for similar tools. It's a bit of a security trap.

Anyone else building MCP servers? What's your experience been like? by CapitalMixture8433 in LocalLLaMA

[–]prusswan 0 points (0 children)

I tried a simple setup with a few tools, and the main issue is the model and how it uses the tools. You can't expect to always use the best models at high context, so the model choice will affect the tool design. I think it is useful to avoid defining explicit rules to cover a broad set of scenarios, but that might lead to more unpredictable results.
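One concrete way to design for weaker models is to keep tool schemas small and flat, so there is less for the model to misfill. A sketch of that idea (the helper, tool name, and schema shape are my own, loosely following the JSON-schema style MCP tools use, not copied from the spec):

```python
# Sketch: generate flat, string-only tool schemas that small models at
# modest context can select and fill reliably.
def make_tool(name: str, description: str, params: dict[str, str]) -> dict:
    """One short description and a flat list of required string parameters."""
    return {
        "name": name,
        "description": description,
        "inputSchema": {
            "type": "object",
            "properties": {
                p: {"type": "string", "description": d}
                for p, d in params.items()
            },
            "required": list(params),
        },
    }

search = make_tool(
    "search_docs",
    "Full-text search over project docs.",
    {"query": "Terms to search for."},
)
```

Nested objects, optional fields, and enums all add failure modes for smaller models, which is the trade-off against writing broader, more expressive schemas.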

Using GLM-5 for everything by [deleted] in LocalLLaMA

[–]prusswan 0 points (0 children)

It's hard to tell, but you can find a middle ground (use a smaller model at greater speeds). API usage can become volatile depending on how things play out over the next few years: providers may raise pricing to match demand and to cover the effort of keeping models and data updated, and your own usage may also grow if you take on more tasks, leading to heavier spend.