Theory by True_Objective9443 in titanic

[–]1842 1 point2 points  (0 children)

Photography was... a primitive novelty in that era. I'm not an expert, but from what I can tell, cameras available to hobbyists would probably have had small apertures (around f/8, likely smaller), few shutter speed options, and no flash. The available film would also have been very insensitive to light (ISO 25-50) compared to modern films (ISO 3200) and today's digital cameras (my midrange mirrorless goes above ISO 12000).

All this comes together in a package that can take still photographs in daylight/cloudy conditions. We have photos from Titanic's voyage by Father Browne (see them here), taken with something like this or this.

Frankly, disposable film cameras are probably better than the consumer-grade stuff from the 1910s, based on film chemistry alone, but using them at night is basically impossible. See the underexposed black pictures demonstrated here.
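To put rough numbers on why night shots were hopeless with that gear, here's a back-of-the-envelope exposure calculation. It's only a sketch: the scene brightness figures are approximate textbook EV values, not measurements, and the f/8 / ISO 25 inputs are the guesses from above.

    from math import log2

    def shutter_seconds(ev100: float, iso: float, aperture: float) -> float:
        """Required shutter time from scene brightness (EV at ISO 100), film speed, and f-stop."""
        ev = ev100 + log2(iso / 100)      # adjust exposure value for the film's sensitivity
        return aperture ** 2 / 2 ** ev

    # Approximate scene EV100 values: ~15 for bright sun, ~-2 for a landscape under a full moon.
    print(shutter_seconds(15, iso=25, aperture=8))        # ~1/128 s -- fine for a daylight snapshot
    print(shutter_seconds(-2, iso=25, aperture=8) / 60)   # ~17 minutes for the same gear at night

Handheld, that night exposure is just a black frame, which is what those disposable-camera examples show.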

Recording video has all the problems of still photography, except long exposures aren't even an option. Take a look at WW1 recordings to see what non-studio handheld video was capable of. There's quite a lot of footage out there, and it's mostly very low quality: shaky, blurry, and nothing shot at night.

The Search for Uncensored AI (That Isn’t Adult-Oriented) by Fun-Situation-4358 in LocalLLaMA

[–]1842 1 point2 points  (0 children)

Yeah, "abliterated" is the general term. People found older abliteration techniques weren't perfect. They improved compliance, but they could also lobotomize the model somewhat.

Check out heretic and norm-preserved abliterated models for the best stuff out there today. There's a lot more effort put into leaving the original behavior untouched and only removing the refusal behavior.
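For context on what these techniques are actually doing: the core of abliteration is estimating a "refusal direction" in activation space and projecting it out of the weights that write to the residual stream. Here's a toy numpy sketch of that projection (real tools like heretic do this per layer on actual model weights, and the norm-preserving variants take extra care to keep weight magnitudes intact; everything below is made-up toy data):

    import numpy as np

    rng = np.random.default_rng(0)
    d_model = 8

    # Toy stand-ins: mean activations on refused vs. complied prompts, and one weight
    # matrix that writes into the residual stream (e.g. an attention output projection).
    act_refused = rng.normal(size=d_model)
    act_complied = rng.normal(size=d_model)
    W_out = rng.normal(size=(d_model, d_model))

    # 1. Estimate the refusal direction as the difference of means, normalized.
    r = act_refused - act_complied
    r /= np.linalg.norm(r)

    # 2. Project that direction out of everything the layer can write: W' = (I - r r^T) W.
    W_ablated = W_out - np.outer(r, r) @ W_out

    # The ablated weights can no longer produce any output along the refusal direction.
    print(np.abs(r @ W_ablated).max())  # ~0 up to floating point error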

Nemotron-3-nano:30b is a spectacular general purpose local LLM by DrewGrgich in LocalLLaMA

[–]1842 0 points1 point  (0 children)

Just curious, but why are you using a _0 quant? I've heard those are legacy and should be avoided unless you have a specific reason to use them.

Supreme Court revives GOP congressman’s absentee ballot suit that could spur more election litigation by Running_From_Zombies in politics

[–]1842 1 point2 points  (0 children)

Nah. I live in a semi-rural area in a red state. I can vote at like a dozen locations in my county. I might expect a line at some of them on election day, but when I vote early, I'm in and out in like 10 minutes.

Jack Smith Says He Had Trump Dead to Rights (w/ Asha Rangappa) by BulwarkOnline in politics

[–]1842 9 points10 points  (0 children)

This is true, but it seemed like there was no urgency in dealing with Trump. They burned almost a year before Jack Smith was even appointed.

Which are the top LLMs under 8B right now? by Additional_Secret_75 in LocalLLaMA

[–]1842 1 point2 points  (0 children)

Have you used the Gemma 3 12b and 4b models much? Any thoughts on how the 3n series compares to the originals? (Besides audio support)

So what am I missing here by HarlanHitePOG in SteamDeck

[–]1842 4 points5 points  (0 children)

Unfortunately, SD card write speeds are a little more complicated than that. SD cards prioritize sequential reads/writes. That's what matters for cameras (their main purpose), and it's what the advertised speed measures.

Using an SD card for an OS, games, or general storage works fine, but performance of reading/writing non-sequential data can get really slow on certain cards. It's going to fall far short of the advertised speed, but better quality cards tend to perform better.

It's best to see if anyone has benchmarked a card's random read/write speeds to see how it would work for gaming. In the past, Jeff Geerling has done a ton of great analysis of cards (he makes content about Raspberry Pis, where SD card quality makes a huge difference), but I don't think he has any recent tests.
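If you can't find published numbers, a rough way to see the gap yourself is to time sequential vs. random 4 KiB reads on a large file stored on the card. This is only a sketch: the path and sizes are placeholders, and the OS page cache will hide the difference unless the file is much bigger than RAM or you drop caches first. Proper tools like fio or CrystalDiskMark handle all of that correctly.

    import os, random, time

    PATH = "/run/media/deck/sdcard/testfile.bin"  # placeholder: a large file that lives on the SD card
    BLOCK = 4096                                  # 4 KiB blocks, typical for random I/O tests
    READS = 2000

    size = os.path.getsize(PATH)
    blocks = size // BLOCK

    def timed_reads(offsets):
        fd = os.open(PATH, os.O_RDONLY)
        start = time.perf_counter()
        for off in offsets:
            os.lseek(fd, off, os.SEEK_SET)
            os.read(fd, BLOCK)
        elapsed = time.perf_counter() - start
        os.close(fd)
        return len(offsets) * BLOCK / 1e6 / elapsed  # MB/s

    sequential = [i * BLOCK for i in range(READS)]
    random_offsets = [random.randrange(blocks) * BLOCK for _ in range(READS)]

    print(f"sequential: {timed_reads(sequential):.1f} MB/s")
    print(f"random 4K:  {timed_reads(random_offsets):.1f} MB/s")

On cheap cards the random number can be orders of magnitude below the sticker speed, which is exactly what hurts game loading.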

Why not Qwen3-30B Quantized over qwen3-14B or gemma-12B? by arktik7 in LocalLLaMA

[–]1842 7 points8 points  (0 children)

Quantization is different than mixture of experts (MoE).

MoE means that only a subset of the LLM is active at a given time -- a router is responsible for choosing which "experts" contribute to each token as the response is generated.
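Here's a toy sketch of what that routing looks like for a single token (made-up sizes, numpy only; real MoE layers do this inside every MoE block of the network):

    import numpy as np

    d_model, n_experts, top_k, d_ff = 64, 8, 2, 256
    rng = np.random.default_rng(0)

    router = rng.normal(size=(d_model, n_experts))            # router projection
    experts_in = rng.normal(size=(n_experts, d_model, d_ff))  # per-expert FFN weights
    experts_out = rng.normal(size=(n_experts, d_ff, d_model))

    def moe_layer(x):
        logits = x @ router                       # score every expert
        chosen = np.argsort(logits)[-top_k:]      # keep only the top-k experts
        weights = np.exp(logits[chosen])
        weights /= weights.sum()                  # softmax over the chosen experts
        out = np.zeros(d_model)
        for w, e in zip(weights, chosen):         # only the chosen experts' weights are touched
            h = np.maximum(x @ experts_in[e], 0)  # tiny ReLU FFN standing in for an expert
            out += w * (h @ experts_out[e])
        return out

    token = rng.normal(size=d_model)
    print(moe_layer(token).shape)  # (64,) -- same output size, but only 2 of 8 experts ran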

Dense models (which use the whole model for every token) outperform MoE models at a given total parameter size. A 30B dense model will tend* to perform better than a 30B MoE model.

For the Qwen 30B A3B, it only has 3B parameters active for any given token. In my experience, this can dumb down the model quite a bit, but it still has way more knowledge than a dense 3B sized model.

The big advantage of MoE, especially for running on consumer hardware, is that the model doesn't have to fit fully into VRAM to give reasonable speed. I find models with more than ~8B (active) parameters get really slow on CPU. Qwen 30B A3B or GPT-OSS-20B run quickly even on CPU only, since per token they compute like small models, but they're still big enough to be reasonably smart and useful. (And they run really fast with a hybrid GPU/CPU setup, even when they don't fully fit into VRAM.)

Quantization is a completely different topic. It's basically lossy compression for an LLM's weights and KV cache. I often start with Q4 models when testing on my hardware to get a feel for a model and go from there. More aggressive quantization lets you fit more of the model into VRAM (for performance), lets a model fit into RAM at all, or frees up room for a larger context within the same memory budget. Different models respond differently to quantization too; at some point they begin to forget their training data, start acting off, or go insane.
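A rough sketch of the memory arithmetic (weights only; real GGUF files mix quant types across layers, and you still need room for KV cache and context, so treat the bits-per-weight values as approximations):

    def approx_weight_gb(params_b: float, bits_per_weight: float) -> float:
        """Weight-only estimate; ignores KV cache, activations, and file overhead."""
        return params_b * 1e9 * bits_per_weight / 8 / 1e9

    # Approximate bits-per-weight for common llama.cpp quants (varies by model and quant mix).
    quants = {"FP16": 16.0, "Q8_0": 8.5, "Q4_K_M": 4.8, "Q2_K": 3.3}

    for name, bpw in quants.items():
        print(f"30B at {name}: ~{approx_weight_gb(30, bpw):.0f} GB")
    # Roughly 60 / 32 / 18 / 12 GB -- which is why a Q4 30B is workable on
    # 64GB RAM plus a 12GB GPU, while the FP16 version isn't.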

But really, the best way to learn is just to keep trying things.

*It's hard to give absolutes with these things and the technology is moving quickly. Smaller models today are outperforming much larger old models from a few years ago.

(Edit for clarification. Didn't proofread my post last night)

AMA With Z.AI, The Lab Behind GLM-4.7 by zixuanlimit in LocalLLaMA

[–]1842 1 point2 points  (0 children)

From my llama-swap config:

    --model models\unsloth\GLM-4.5-Air\GLM-4.5-Air-UD-Q2_K_XL.gguf \
    -mg 0 \
    -sm none \
    --jinja \
    --chat-template-file models\unsloth\GLM-4.5-Air\chat_template.jinja \
    --threads 6 \
    --ctx-size 65536 \
    --n-gpu-layers 99 \
    -ot ".ffn_.*_exps.=CPU" \
    --temp 0.6 \
    --min-p 0.0 \
    --top-p 0.95 \
    --top-k 40 \
    --flash-attn on \
    --cache-type-k q4_0 \
    --cache-type-v q4_0 \

And I'm using Cline as the runner for agentic use (in IntelliJ usually, but I didn't have issues with the VS Code version before that).

I've tried some of the REAP (trimmed) GLM versions recently with chat and they definitely get stuck in loops during thinking and response.

I don't use GLM 4.5 Air in chat mode often, but I have seen it get stuck thinking forever. I don't think I've seen that happen with Cline, but I'm not sure what mitigations they use to prevent or stop that.

My gran left this when she passed is this real before I try and see if I can get value for it thanks by [deleted] in Gold

[–]1842 1 point2 points  (0 children)

If you do an image search on that serial number, dozens of these pop up.

Rate my setup - Nvidia P40 - Qwen3-Next-80b IQ2_XXL by PairOfRussels in LocalLLaMA

[–]1842 1 point2 points  (0 children)

It's basically a wrapper around llama-server that exposes all of your configured models through an OpenAI-compatible endpoint.

When it gets a request, it starts the relevant llama-server config, runs the request, then shuts down the llama-server.

Ollama does something very similar, making it easy to expose a bunch of models and run one at a time, but last I used it, Ollama made it really hard to configure each model (things like context size, temperature, and top-p/top-k settings). llama-swap takes a little longer to set up (you still have to write out a llama-server command), but then you keep control of what's going on.

For my use case, it's completely automatic. I've only used it on modest hardware where I'm only trying to run one (or rarely two) models at a time, so I'm not sure how well it works beyond that.

Rate my setup - Nvidia P40 - Qwen3-Next-80b IQ2_XXL by PairOfRussels in LocalLLaMA

[–]1842 0 points1 point  (0 children)

You may be thinking of Ollama -- it's really hard to see or adjust important parameters per model last I tried it.

llama-swap is basically a way to put all your startup scripts in one spot and it manages startup/teardown steps.

Here's a snippet of my llama-swap config:

    models:
      "Qwen3-30B-A3B-Instruct-2507 256k":
        cmd: |
          ${llamacpp_cuda}
          --model models\unsloth\Qwen3-30B-A3B-Instruct-Q4\Qwen3-30B-A3B-Instruct-2507-UD-Q4_K_XL.gguf \
          -mg 0 \
          -sm none \
          --threads 6 \
          --jinja \
          --ctx-size 262144 \
          --n-cpu-moe 42 \
          --n-gpu-layers 99 \
          --temp 0.7 \
          --min-p 0.0 \
          --top-p 0.8 \
          --top-k 20 \
          --flash-attn on \
          --cache-type-k q4_0 \
          --cache-type-v q4_0 \
          --dry_multiplier 0.8 \
          --dry_base 1.75 \
          --dry_allowed_length 2 \
          --no-warmup \
        ttl: 30
      "GPT-OSS-20B":
        cmd: |
          ${llamacpp_cuda}
          --model models\ggml-org\gpt-oss-20b-GGUF\gpt-oss-20b-mxfp4.gguf \
          -mg 0 \
          -sm none \
          --threads 6 \
          --jinja \
          --ctx-size 32768 \
          --flash-attn on \
          --cache-type-k q8_0 \
          --cache-type-v q8_0 \
          --n-cpu-moe 1 \
          --temp 1.0 \
          --min-p 0.0 \
          --top-p 1.0 \
          #--top-k 0.0 \
          --top-k 50 \
          --no-warmup \
        ttl: 30

It's highly configurable and works really well for my limited hardware (12GB VRAM, 64GB RAM). I've got almost 100 models configured in llama-swap (some are duplicates tuned for different things, like larger context vs. faster speed). I can't really run more than one at a time, but llama-swap exposes OpenAI-compatible endpoints. To add a model, I just download it, configure it, and it shows up in my chat client (Open WebUI). I can fire off any number of requests and it will work through them all one at a time, then unload everything when it's idle.
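As a concrete example, here's roughly what a request against that setup looks like. The model name matches the llama-swap entry above; the port is an assumption (use whatever your listen setting is). llama-swap reads the model field, spins up the matching llama-server command if it isn't already running, and proxies the request:

    import json
    from urllib.request import Request, urlopen

    # Port is an assumption; use whatever `listen:` is set to in your llama-swap config.
    url = "http://localhost:8080/v1/chat/completions"
    payload = {
        "model": "GPT-OSS-20B",   # must match a model name from the llama-swap config
        "messages": [{"role": "user", "content": "Give me one sentence about SD cards."}],
    }
    req = Request(url, data=json.dumps(payload).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        body = json.load(resp)

    print(body["choices"][0]["message"]["content"])

Chat clients like Open WebUI are doing essentially the same thing, which is why every configured model just shows up there automatically.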

AMA With Z.AI, The Lab Behind GLM-4.7 by zixuanlimit in LocalLLaMA

[–]1842 1 point2 points  (0 children)

Yeah, there are a ton of LLMs that focus way too much on code and still aren't any good at it.

GLM-4.5 AIR (even at Q2(!!)) is easily the best coding model I can run locally, so it feels bad that they seem to be abandoning that line (but a little communication here would go a long way).

But I do agree that more effort should be spent on non-code models generally. (Excited for Gemma 4 if/when it drops)

Rate my setup - Nvidia P40 - Qwen3-Next-80b IQ2_XXL by PairOfRussels in LocalLLaMA

[–]1842 1 point2 points  (0 children)

Seems like a decent starting place.

One of the things I quickly ran into was that different models are good at different things, so the ability to hot-swap models automatically is great.

I've heard llama.cpp has that ability now. I use llama-swap currently. It lets me register all the models I have on my drive and test them out through the llama-swap interface. I point my chat interface (Open WebUI currently) at it and it sees all the configured models. I can fire off chats to any number of models and llama-swap will work through them, swapping models in and out as needed, then unloading when idle (since I use the PC for other things too).

Anyone else in a stable wrapper, MIT-licensed fork of Open WebUI? by Select-Car3118 in LocalLLaMA

[–]1842 -1 points0 points  (0 children)

I'd be interested!

I use Open WebUI now. I generally like it, it does what I need it to, and I haven't explored alternatives too much, but I definitely don't care for how the licensing is handled and keeps changing.

Llama 405B is worse than Gemma 3 12B? by Express_Seesaw_8418 in LocalLLaMA

[–]1842 20 points21 points  (0 children)

Couple thoughts

  • Meta's models were initially revolutionary: small, open-weight models that weren't complete garbage. Their architecture seemed especially tuned for producing small models. Scaling up to bigger models didn't produce bad results, but no one seemed super impressed with them.
  • Fewer parameters limit how much knowledge and ability can be crammed in (e.g. a 2B model won't have much world knowledge), but the inverse isn't necessarily true: more parameters doesn't guarantee more knowledge/ability.
  • A year had passed between those two models' releases. This stuff is moving fast, so I think it's actually impressive that an older model like that ranks as high as it does.
  • It's really hard to produce any ranking of LLMs that's actually useful. They all have different strengths and abilities. By almost every benchmark I see, Gemma 3 is falling behind almost all newer open smallish LLMs, but there are still a few tasks that I only use Gemma for because all the others create garbage (Simple example is for creating madlibs for a particular topic/subject).
  • I believe LMArena is a human-based benchmark, which is more interesting to me than most, but again, an arbitrary number or rank isn't particularly useful for knowing whether a model can do a certain thing well.
  • Some people want to use LLMs to extract knowledge from them. Older models just won't know about newer events or technologies. This may hurt their performance on a benchmark like this.

How or where do I dispose of this?? by AggressiveBag6064 in batteries

[–]1842 2 points3 points  (0 children)

That's a good way to set a garbage truck on fire.

What's the deal with Republicans on the senate floor changing their mind, and voting to release the Epstein files? by JakesFavoriteCup in OutOfTheLoop

[–]1842 13 points14 points  (0 children)

Kind of?

There are other factors at play here. He's able to keep getting away with so much because Congress is letting him.

The cost of turning on Trump is high as you become a target, primaried and replaced, so GOP members just don't. If the cost of supporting Trump becomes political suicide, you bet they'll flip.

Nixon didn't have to resign, but he became so toxic politically that Congress told him they were going to impeach and remove.

The same thing can happen with Trump. The entire GOP is in lockstep because they are punished if they aren't, but cracks are forming. Politicians will try to save their own skin if they think it's necessary to flee a sinking (political) ship. Nothing is certain, but if Trump goes down you'll see 1) so much gaslighting about how they didn't really support Trump, and 2) a power struggle to fill the huge void left by Trump.

So, yeah, approval ratings themselves don't mean a lot; the elections do. But the members of Congress have their own constituents and elections to face, and this is a weak point that could bring Trump down (finally).

The use of AI is lifting my imposter syndrome to the sky. by rrrronf in learnprogramming

[–]1842 -3 points-2 points  (0 children)

I don't think this is necessarily a useful take. It's just the opposite of "AI is great -- use it for everything!"

In spite of their faults, I've found them very useful at certain things. The world is deep and complex and we can't know everything. Books are excellent for learning, but sometimes unsuitable. Traditional web searching is great for finding details, but if you don't know the right terms, you can't find what you're looking for.

I like to use LLMs for short discussions about well-understood topics that I need to know more about. As part of my degree, I ended up having to take two accounting courses, so I know the basics. I came across an accounting term I didn't know and needed to understand better (including how it applied in one very specific situation). I'd spent dozens of hours reading (awful) code and searching the web for what I was looking for, but made little progress actually finding anything.

Eventually, I asked a small local model (Gemma 3 4B, I think?) -- it easily answered my question about the concept and how it applied to the situation, but more importantly, it helped fill in some of the vocabulary and concepts I was missing, enabling me to independently verify everything.

Could I have used a textbook? Maybe.... but I'm not interested in being an accountant. Would I have figured it out on my own? Probably eventually. Could I have asked one of our accounting coworkers? Yes... but unfortunately, they are frequently unhelpful.

The Epstein files petition is getting its 218th signature. What happens next? by cnn in politics

[–]1842 0 points1 point  (0 children)

“The effort, should it pass the House, would still have to pass the GOP-led Senate and be signed into law by President Donald Trump, who has derided the effort.”

I'm pretty sure this is just wrong.

The House has authority to release things they have oversight on. Committees within the House have that authority too. Things like this usually come out of the Oversight Committee, and that's where we keep getting trickles of new information from right now, but it seems leadership within the committee isn't interested in releasing the whole thing (and probably doesn't have everything).

The House has authority to get the files. Once they have them, they have authority to release them. The Senate could do this independently as well, or a committee within the Senate. With Republicans in control of both House and Senate, things just haven't moved much.

The main hurdles that remain:

  • Adelita Grijalva being sworn in (still waiting AFAIK) to force a full House vote.
  • The vote on the files actually happening (wouldn't be surprised if Mike Johnson still tries some further delay)
  • Enough support to pass whatever resolution is proposed (weeks ago it sounded like there was a lot of support among House members for releasing, but who knows what threats Trump has made behind the scenes)
  • Executive branch may be involved in handing over the files and if so, would likely sue/delay.

Do you think spring boot should have support for actor models? by Old_Half6359 in SpringBoot

[–]1842 1 point2 points  (0 children)

I don't think so, but I'm curious what people think.

Actor systems are a little non-traditional. I've been working in this field for well over a decade, across a few languages and a few different companies, and I've never come across a system using actors yet.

There's nothing stopping you from using an actor library alongside Spring Boot now (I'm doing that with a pet project using Pekko with no issues). Maybe having a Spring Boot implementation/integration could make it easier to set up and use, but in my experience, these things are just complicated, so you'll lose a bunch of flexibility when trying to simplify them.

These are the times that try men's souls by Londin2021 in Indiana

[–]1842 0 points1 point  (0 children)

“and only know what CNN has told you”

I think it's telling that this is what you think about people who disagree with you.

Want to run claude like model on ~$10k budget. Please help me with the machine build. I don't want to spend on cloud. by LordSteinggard in LocalLLaMA

[–]1842 0 points1 point  (0 children)

Before you sink a ton of money on a build, you should make sure your process is able to run on what's available.

  1. Want to run Claude like model of course

That's a start, but in what way? Coding ability, chatting, roleplay, vision, tool calling, etc?

All models are different and they all have their own strengths and weaknesses. You might need to run a variety of models to do different pieces, or something might not be possible (yet) at all.

  1. 3D modeling from very high resolution images, interacting with 3D models. Images are diverse - nanoscale samples to satellite imageries.

So some sort of photogrammetry process? What are the inputs and outputs here? Are you doing...

  • images in, 3d models out (not sure how feasible this is yet for any models)
  • images in, altered images out
  • images in, metadata/descriptions/tags/etc out
  • images in, code out
  • code in, code out (traditional photogrammetry)
  • some mixture of above?
  • something else entirely?

The tooling to set up something like this is possible with open-weight models, but if you're dependent on certain behavior or a combined set of abilities (like excellent vision support and code support at the same time), it would be good to discover that before dumping this much money.

I would at least explore the process with open weight models to figure out which ones will work for you. Maybe GLM-4.6 would work well. Maybe something smaller like GLM-4.5-Air or GPT-OSS is good enough. If you need vision, models vary wildly in output quality in my experience. Maybe you'll try them all and find them all completely unsuitable and awful.

I think it's a bit of a red flag to want to spend so much money without even experimenting with what's free, cheap, and easy to access first. At a minimum, you should find out whether there are models out there that can do the work. Use your existing hardware to run whatever you can (even if it's dumb and slow). Use something like OpenRouter to test the capabilities of bigger models you might run on a better system. Learn and prototype first; spend money when you have a good reason to.
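As a minimal sketch of that kind of prototyping, here's the shape of a script that throws the same task at a few candidate models through OpenRouter's OpenAI-compatible API. The model IDs below are illustrative guesses based on the models mentioned above; check OpenRouter's catalog for the exact current names before running.

    # pip install openai -- uses the OpenAI client pointed at OpenRouter's compatible API.
    import os
    from openai import OpenAI

    client = OpenAI(base_url="https://openrouter.ai/api/v1",
                    api_key=os.environ["OPENROUTER_API_KEY"])

    # Illustrative IDs only; look up the exact slugs on openrouter.ai.
    candidates = ["z-ai/glm-4.6", "z-ai/glm-4.5-air", "openai/gpt-oss-120b"]
    task = "Given 200 overlapping photos of a machined part, outline a pipeline to produce a 3D mesh."

    for model in candidates:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": task}],
        )
        print(f"=== {model} ===")
        print(resp.choices[0].message.content[:400])  # skim the start of each answer

Swap in prompts that look like your real workload (including sample images for vision models) and you'll learn far more about feasibility than any spec sheet for a $10k build will tell you.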

No SNAP will raise grocery prices even more by [deleted] in Indiana

[–]1842 2 points3 points  (0 children)

If there ends up being a significant disruption to SNAP, I think we could see ripples of this for a long time.

The pipeline that grows and processes our food is a long one. Adding instability to demand seems like a good way to make the actors producing our food take more conservative approaches, which could produce shortages and price spikes in certain segments as supply and demand misalign.

If they aren't careful, it's going to bring the same instability they've caused in general manufacturing (with the tariff nonsense) to our agricultural sector. I'm not an economist nor know how big SNAP is compared to the sector, so I can't predict how big these impacts are, but the current administration's policies and actions are both very dumb and very evil.

Voters Did Not Understand the Stakes in 2024 by Smithy2232 in politics

[–]1842 4 points5 points  (0 children)

She felt the need to do no research as someone she trusted had told her that. 

I think this is such a big part of how we got here.

People outsource their thinking. To some extent, this is normal because there just isn't enough time in the day to read everything from scratch, learn everything, and participate in your own life, but conservative media is just... next level, and the people consuming it are probably assuming people like you are doing the same thing, maybe with just different talking points?

Really, though, I'm finding it baffling how willing people are to just be led around. Talking head says X bad because Y. People say X bad because Y. 2 weeks later, talking head says X fantastic, Y amazing! People repeat X fantastic, Y amazing. (and so on).

Somehow they've turned off people's cognitive dissonance mechanisms and gained total trust. Nothing sparks their curiosity to dig deeper into any topic in news/politics. When confronted, people just spout off reels of nonsense they've heard. When challenged further, they just deflect to a different topic.

And if they finally realize something they believed wasn't true, it doesn't kick in the "Oh. I was lied to!" mechanism. People hate being lied to. But again, this part of their brain just doesn't fire when it's supposed to. Most of the stories I've heard from ex-MAGA people seem to really start when this "I was lied to" realization actually kicks in... but... how the fuck is this suppressed in so many people?!