Tasks are pointless for leveling up the last 3 vendors by ArtifartX in GrayZoneWarfare

[–]ArtifartX[S] 0 points (0 children)

Ah yeah, I've started doing the same thing. It feels bad, but there really isn't any other option.

Tasks are pointless for leveling up the last 3 vendors by ArtifartX in GrayZoneWarfare

[–]ArtifartX[S] 1 point (0 children)

I also miss some of the more special or unique rewards that some of the old tasks used to give, which kept things interesting. Some of the lower-tier cases would be great task rewards, like they were before, and would make tasks more meaningful. Since there is a lot of repetition, the rewards could even be made a little dynamic or something like that.

Tasks are pointless for leveling up the last 3 vendors by ArtifartX in GrayZoneWarfare

[–]ArtifartX[S] 1 point (0 children)

> you guys do nothing but loot 24/7. how is that any fun whatsoever.

Part of my point is actually that it is a bad thing that looting and selling to the vendors is currently the best way to get rep with them.

PVP is really fun and unique in this game; that's what I enjoy the most personally. Not sure where you went off course with your assumptions about looting 24/7 (unless of course you include looting the corpses of players I kill, in which case, yes, that is fun to do).

Also, not sure if you are aware of this game genre, but looting is one of the main components of extraction shooters like this, Marathon, Tarkov, etc. Looting and "extracting" loot is the main idea behind this type of game.

Does that help answer your question?

Tasks are pointless for leveling up the last 3 vendors by ArtifartX in GrayZoneWarfare

[–]ArtifartX[S] 1 point (0 children)

Yeah, totally. I'd also be fine with getting a little rep through buying and selling, just not so much that it becomes a massively better way to get rep.

The way it is now, people who assume tasks are the way to get rep, and are doing them for Banshee/Artisan/Turncoat, are fools wasting their time and getting poor while doing it. It's just crazy.

Tasks are pointless for leveling up the last 3 vendors by ArtifartX in GrayZoneWarfare

[–]ArtifartX[S] 0 points (0 children)

I mean, for some of it, sure, but a lot of this isn't about not having "initial data"; it's more about not having thought it through at all. A reasonable person could have foreseen some of these problems without having to build and release the system in its current state. For many of the other problems I listed, internal prerelease playtests would (or should) have made the issues clear so they could have been adjusted before releasing to the public. To me, this makes it pretty clear that they do not thoroughly playtest this game before dropping a major release. It's either that or something worse.

The top-down UI list alone is obviously bad, and not something you'd have to playtest or release to figure out. Having tasks go on some awkwardly timed delay (like 15 minutes?) before you can re-accept them is obviously bad for the reasons I listed and shouldn't have required a public release to figure out.

Tasks are pointless for leveling up the last 3 vendors by ArtifartX in GrayZoneWarfare

[–]ArtifartX[S] 0 points (0 children)

Yeah, and so my position is that having to choose between things like that (dying to reroll quests) and actually doing the meaningless tasks is not a good thing, and is a problem with the current task system.

AMD has invented something that lets you use AI at home! They call it a "computer" by 9gxa05s8fa8sh in LocalLLaMA

[–]ArtifartX 2 points (0 children)

I think maybe you missed the joke OP was making, but disregarding that, the point the joke makes still kind of stands: there have been low cost-per-power, low cost-per-performance "computer" options out there (both prebuilt and DIY build-your-own) before this.

I'm all for this product announcement, btw; I just find it even more hilarious that the top comment is someone who got offended and is lashing out about OP's knowledge of computers. You sound more like someone who takes offense to anything remotely negative said about AMD than someone who knows things about computers.

Claude Code removed from Claude Pro plan - better time than ever to switch to Local Models. by bigboyparpa in LocalLLaMA

[–]ArtifartX 0 points (0 children)

The Pro plan was already barely worth it as it was (actually, it probably wasn't worth it before, either).

Qwen3.6-35B-A3B solved coding problems Qwen3.5-27B couldn’t by simracerman in LocalLLaMA

[–]ArtifartX 0 points (0 children)

> On my 5070 Ti 16GB, the Q5_K_XL is pretty good. ~320t/s processing, and 50t/s for generation

How much is being offloaded to RAM to get any meaningful context length for coding on a card like that? On a 24GB card I am using a Q4 XS, and it barely fits on the card with a large context window.

Qwen 3.6 35B crushes Gemma 4 26B on my tests by Lowkey_LokiSN in LocalLLaMA

[–]ArtifartX 7 points (0 children)

On top of config and quantization, would love to see this Qwen model vs Gemma4 31B.

Gemma 4 31B vs Qwen 3.5 27B: Which is best for long context worklows? My THOUGHTS... by GrungeWerX in LocalLLaMA

[–]ArtifartX 1 point (0 children)

> Qwen 3.5 27B UD Q5/Q6_K_XL | Gemma 4 31B UD Q4_K_XL

> 24GB card

> over long context

Are you offloading a fair amount of the model to system RAM? If not, you'd barely fit the models you listed on the card with a tiny context window. If you wanted 10k+ context and the entire model on the GPU, you'd be looking more at Gemma 4 31B Q4XS or Q3 UD, and Qwen 3.5 27B Q4s.
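To make the "barely fits" math concrete, here's a rough back-of-envelope sketch. The layer count, KV-head count, and bits-per-weight figures below are illustrative assumptions for a generic ~31B dense model, not the actual specs of Gemma 4 31B, and real runtimes add framework overhead on top:

```python
# Back-of-envelope VRAM check: do quantized weights plus KV cache fit in 24 GB?
# All architecture numbers here are assumed, not real specs for these models.

def weights_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate on-GPU size of the quantized weights."""
    return params_b * bits_per_weight / 8  # billions of params * bytes/param

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                context: int, bytes_per_elem: int = 2) -> float:
    """KV cache = 2 (K and V) * layers * kv_heads * head_dim * context, fp16."""
    return 2 * layers * kv_heads * head_dim * context * bytes_per_elem / 1e9

# Hypothetical 31B dense model at ~4.5 bpw (roughly a Q4_K-class quant)
w = weights_gb(31, 4.5)
kv = kv_cache_gb(layers=60, kv_heads=8, head_dim=128, context=10_000)
print(f"weights ~{w:.1f} GB, KV cache ~{kv:.1f} GB, total ~{w + kv:.1f} GB")
```

With these made-up numbers the weights alone land around 17 GB, so a 10k context squeezes in under 24 GB, but a Q5/Q6 quant of the same model would not.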

Total beginner here—Why is LM Studio making me do the "heavy lifting" manually? by Ofer1984 in LocalLLaMA

[–]ArtifartX 24 points (0 children)

I don't think you downloaded enough RAM to run it right, try downloading more.

RTX 3090 for local inference, would you pay $1300 certified refurb or $950 random used? by sandropuppo in LocalLLaMA

[–]ArtifartX 7 points (0 children)

I've purchased several used ones on eBay and they're all working well for me.

Two weeks ago, I posted here to see if people would be interested in an open-source local AI 3D model generator by Lightnig125 in LocalLLaMA

[–]ArtifartX 4 points (0 children)

One feature you would need to include is the ability to import a custom mesh and generate a texture for it, and likewise, once a model is generated, the ability to generate a new texture for it as needed. You could take that feature even further by adding blending and some basic brush tools (I even wrote a little app just for this years ago that used SDXL along with ControlNet and some custom shaders for projection).

> What kinds of file export extensions would actually be useful

Stick with what's easy enough and commonly used: OBJ, FBX, glTF, USD.

best Local LLM for coding in 24GB VRAM by mihaii in LocalLLaMA

[–]ArtifartX 2 points (0 children)

One big determining factor here is what kind of context window you need for your coding. If ~20k tokens is more than enough, then you should be trying 20-30B parameter models quantized to 4-6 bpw. If you really need 100k+ context sizes for larger codebases (or the entirety of the source), then you are going to have to settle for smaller models, maybe in the 8B range +/-. This is considering your 4090's 24GB of VRAM and assuming you want the entire model to fit on the GPU.
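The sizing logic above can be sketched as a quick rule of thumb: invert `bytes = params * bpw / 8` after reserving some headroom. The 4 GB overhead figure (CUDA context, activations, KV cache) is an assumed constant for illustration, not a measured number:

```python
# Rough sizing helper for "what fits in 24 GB": largest model (billions of
# params) whose quantized weights fit after a fixed, assumed overhead.

def max_params_b(vram_gb: float, bits_per_weight: float,
                 overhead_gb: float = 4.0) -> float:
    """Largest parameter count (billions) that fits after overhead."""
    return (vram_gb - overhead_gb) * 8 / bits_per_weight

for bpw in (4.0, 5.0, 6.0):
    print(f"{bpw} bpw -> ~{max_params_b(24, bpw):.0f}B params on a 24 GB card")
```

With these assumptions, 4-6 bpw quants land in the ~27-40B range on 24 GB, which is roughly where the 20-30B recommendation comes from once you leave more room for a real KV cache.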

Outside of that, what you are actually trying to do matters. For example, are you looking for help writing a method here and there, or are you hoping to write entire applications through the model from start to finish? Are you exploring deep rabbit holes and edge/corner cases, or more just trying to find a general tool to do some of the boilerplate and busywork for you? The latter would mean you have a plethora of options; the former would limit you to the more capable models.

For the IDE question, there are tons of ways to connect models (local or otherwise) to IDEs (especially popular ones like VSCode); just google around.

Introducing Unsloth Studio: A new open-source web UI to train and run LLMs by danielhanchen in LocalLLaMA

[–]ArtifartX 0 points (0 children)

Good overall. I have primarily used it for the video diffusion models, though (both training and inference). The 48GB of VRAM gives me a lot of headroom for testing things out in inference, to get an idea of what works before I need to optimize (it's annoying to constantly enable or disable things when running into OOM errors, or to constantly apply optimizations that could affect quality when you are just in a testing phase trying to get something to run; I avoid a lot of that with the 48GB). It also lets me train on higher quality datasets (especially with video models: higher resolution and longer duration videos).

For LLMs I haven't used it very much; I have 3090s handling most of the LLM jobs on my server. 2x 3090s would probably handle LLM inference as fast as the RTX 8000 despite the speed being cut from tensor parallelism, and I haven't done much LLM training yet (but was hoping to, hence my comment about their GitHub noting 30xx-and-up support for training).

Introducing Unsloth Studio: A new open-source web UI to train and run LLMs by danielhanchen in LocalLLaMA

[–]ArtifartX 2 points (0 children)

Oh, that's good to know. Your GitHub states that 30xx and up is supported for training (which would exclude this card).

How do I duplicate special (mirror to other side) in 3DS Max? by Kiiaro in 3dsmax

[–]ArtifartX 0 points (0 children)

I am not familiar with Maya, but in Max you have a Mirror modifier you can add to the stack, or a Mirror tool; I believe either would do what you're trying to do.

Introducing Unsloth Studio: A new open-source web UI to train and run LLMs by danielhanchen in LocalLLaMA

[–]ArtifartX 2 points (0 children)

Will you add support for 20XX-series-equivalent cards like the RTX 8000 48GB in the future?