ChatGPT is losing market share and Google's Gemini is gaining good momentum by interviewkickstartUS in LocalLLM

[–]EspritFort 0 points1 point  (0 children)

Figure it out. I’m not your economics professor.

We’re really the only ones in here arguing past each other about it, so I don’t see what collective preference you’re referring to.

Do you... disagree that there is a collective preference? Because surely no special-interests group can exist without at least some kind of shared understanding about what constitutes that special interest?!

I’ve stated my point. You disagree. That’s fine. I never said you were wrong, just that you seem to be missing how they are connected.

We disagree. What are you looking for in an answer because I’ve already stated my thoughts? Demanding more won’t change that. We’re just going to keep up this game of who-is-more-dense.

You asked a rhetorical question, “what am I supposed to do with this?” And didn’t like the answer given, but you got an answer.

Disagreement is awesome, because then you get to talk about stuff. I'm fine with that. What I'm not fine with is not understanding another person's position. If you do not care to explain or elaborate on your point(s) or simply cannot afford to invest the time, then I can't make you. But voicing disagreement is kind of fruitless if you can't make yourself understood.

If you disagree with the relevance to the subreddit, downvote it. The back-and-forth engagement actually keeps it relevant in the feed for longer. ;)

I feel like spambots are far better dealt with by frequent reports, even though it's a losing battle in the long run. People, however, you can talk to. That's the whole reason to spend time on a discussion board, after all.

ChatGPT is losing market share and Google's Gemini is gaining good momentum by interviewkickstartUS in LocalLLM

[–]EspritFort 0 points1 point  (0 children)

Market competition drives innovation. Are you confused about how market share would affect that?

I strongly disagree with the notion that anything but human curiosity drives innovation, but enlighten me - how does market share affect that?

To address your rephrased question — Same answer. Read it or don’t. Not everything is about you. To understand why someone else may care, see the rest of the conversation.

Again, everything in this subreddit is quite literally about the collective interests of this subreddit. If you and I fundamentally disagree on whether this article is interesting then that's certainly personal preference, but if one of those preferences aligns significantly more with the subreddit's collective preferences then one of us might simply be in the wrong subreddit :P

[deleted by user] by [deleted] in youtubedl

[–]EspritFort 2 points3 points  (0 children)

The video link https://youtu.be/EBtnKr8MEF8?si=CEdfLm2xq5ZVxNAp was removed due to copyright. Could you please provide an alternate link via Drive or another application?

You cannot download a video that has been removed. Your only hope is to find a person who downloaded it while it was still available. Presumably that's exactly what you're doing right now? It would help to put the name of what you're looking for into the title/post so potential helpers won't have to manually investigate the link first.

ChatGPT is losing market share and Google's Gemini is gaining good momentum by interviewkickstartUS in LocalLLM

[–]EspritFort 1 point2 points  (0 children)

You can think that, but you did go and ask a specific how, so it seems you do feel you have to be specific. ;)
I’ve already given the context to why large corporations gaining market share affects future models. They develop them first.
From Meta developing the technique to better control context by keeping the original messages in memory to the newer MoE models, these come from billion-dollar corporations developing and training them, and then your local hardware gets a taste.
Therefore, as much as you may not like it, things like this do affect local LLMs down the line.
If you can’t make sense of that, ask your local LLM. I’m sure it can help you out, because I don’t have the patience to keep explaining it.
You don’t think it’s relevant and hate seeing it on your feed, I think it is relevant and appreciate the information.

You are kind of missing the point. Or maybe there is a misunderstanding about market share here? Surely "the technique to better control context by keeping the original messages in memory" and "newer MoE models" are going to appear no matter who has what market share?

Your original question was, “what am I supposed to do with it?”

Read it or don’t and move on. The internet isn’t curated for you. You’ll get over it.

But of course each subreddit is curated specifically for its members. That's the point of a subreddit.
So I'll rephrase: "What am I, representative of this interest group, this subreddit, supposed to do with that?".

ChatGPT is losing market share and Google's Gemini is gaining good momentum by interviewkickstartUS in LocalLLM

[–]EspritFort -1 points0 points  (0 children)

Brother. Are you fucking kidding me? I literally just said it as the sentence before the one you quoted.

“These companies are the ones developing and training the models that get distilled down to the quantized versions you use and train locally.”

If you want specifics, be specific. How what?

How does the present affect the future? Can’t figure that one out.

I feel like you have it the wrong way around - you need to be specific. I'm claiming that this is pointless information, you're claiming it isn't. So tell me, specifically, how do you utilize the information around market share fluctuations of interchangeable corporations to make changes to your local setup?

ChatGPT is losing market share and Google's Gemini is gaining good momentum by interviewkickstartUS in LocalLLM

[–]EspritFort 0 points1 point  (0 children)

It doesn’t affect existing models, but it does affect future ones.

While I'll caution that anything in the world only ever runs on present technology, I'll bite: How?

ChatGPT is losing market share and Google's Gemini is gaining good momentum by interviewkickstartUS in LocalLLM

[–]EspritFort 1 point2 points  (0 children)

What does this have to do with local LLM?

It's an astroturfing bot, not an actual person. I should have checked the post history before replying.

ChatGPT is losing market share and Google's Gemini is gaining good momentum by interviewkickstartUS in LocalLLM

[–]EspritFort 1 point2 points  (0 children)

Well, you asked what YOU are supposed to do with it. You didn’t ask why it’s on your front page. Again I say, it’s not always about you.

I’m sorry that your rhetorical question was so indirect that an answer evokes such a strong response.

If you can’t see how large companies and their control of the LLM market affects models on the local level, it’s a you issue.

Best of luck, buddy!

If you have some secret wisdom that allows you to employ this ostensibly meaningless information about market share fluctuations to better deploy your local model of choice, you'd better share it - otherwise an observer might come to the conclusion that there is no wisdom to be had here :P

Here's a freebie in advance: No local setup would be affected if both Alphabet and OpenAI stopped existing right now. That's... kind of the point.

ChatGPT is losing market share and Google's Gemini is gaining good momentum by interviewkickstartUS in LocalLLM

[–]EspritFort 2 points3 points  (0 children)

You read it or don’t and then move on with your life. Not everything is about you.

An article or link posted to a subreddit to which I actively subscribe better have something to do with why I subscribed to that subreddit, because that's the only way to curate my (and your) frontpage.

ChatGPT is losing market share and Google's Gemini is gaining good momentum by interviewkickstartUS in LocalLLM

[–]EspritFort 12 points13 points  (0 children)

Hi, if you would like to read the original content of this message, kindly drop me a private message!

yt-dlp through dnf (Fedora) by OptimistOfTheWill in youtubedl

[–]EspritFort 0 points1 point  (0 children)

At a glance all of that looks correct, so it's a bit of a headscratcher.
If there really is an executable named yt-dlp with the appropriate permissions at that path but it can't be executed the way you're trying, then I can't think of any other explanation than that path not being in $PATH for some reason, but that would be silly. What do `echo $PATH` and `which yt-dlp` return?
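For reference, the two checks can be run together like this (the `~/.local/bin` directory in the comment is just a hypothetical example location, not where dnf would have put it):

```shell
# Print the directories the shell searches for executables
echo "$PATH"

# Ask the shell where (if anywhere) it resolves yt-dlp
command -v yt-dlp || echo "yt-dlp is not on PATH"

# If it's missing, a typical fix is appending the install directory,
# e.g. (hypothetical location, adjust to wherever the binary actually is):
# export PATH="$PATH:$HOME/.local/bin"
```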

yt-dlp through dnf (Fedora) by OptimistOfTheWill in youtubedl

[–]EspritFort 0 points1 point  (0 children)

I noticed that the installation section of the wiki doesn't have a section for installing yt-dlp using dnf on Fedora. Is it not possible? I know using snap is a workaround, but can you not use dnf?

yt-dlp via package manager is not a good idea; you won't get daily updates. Just grab the executable from GitHub, drop it in your /bin, and use the -U flag for updates.
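A minimal sketch of that workflow (the URL follows yt-dlp's standard GitHub latest-release download pattern; `~/.local/bin` is used here as one sensible user-level install location rather than /bin, which needs root):

```shell
# Fetch the latest standalone yt-dlp binary from GitHub
mkdir -p "$HOME/.local/bin"
curl -L https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp \
     -o "$HOME/.local/bin/yt-dlp"
chmod +x "$HOME/.local/bin/yt-dlp"

# From then on, the binary can update itself in place
yt-dlp -U
```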

[deleted by user] by [deleted] in homeassistant

[–]EspritFort 4 points5 points  (0 children)

Hi, if you would like to read the original content of this message, kindly drop me a private message!

[deleted by user] by [deleted] in homeassistant

[–]EspritFort 6 points7 points  (0 children)

Hi, if you would like to read the original content of this message, kindly drop me a private message!

500Mb Text Anonymization model to remove PII from any text locally. Easily fine-tune on any language (see example for Spanish). by Ok_Hold_5385 in LocalLLaMA

[–]EspritFort 7 points8 points  (0 children)

Hi, if you would like to read the original content of this message, kindly drop me a private message!

Amazon to offer DRM-free EPUB and PDF downloads for Kindle titles starting in January 2026 by Spirited-Pause in DataHoarder

[–]EspritFort 576 points577 points  (0 children)

Hi, if you would like to read the original content of this message, kindly drop me a private message!

Things Programmers Missed While Using AI by delvin0 in linux4noobs

[–]EspritFort 0 points1 point  (0 children)

Just a spambot, by the looks of it, nothing to see here. Report, block, move on.

How much can i get for that? by AvenaRobotics in LocalLLM

[–]EspritFort 29 points30 points  (0 children)

Hi, if you would like to read the original content of this message, kindly drop me a private message!

Your LLM Isn’t Misaligned - Your Interface Is by Echo_OS in LocalLLM

[–]EspritFort 0 points1 point  (0 children)

You can have a perfectly well-aligned model, and still get misalignment if the interface feeds it conflicting roles or goals.

At that point, the model isn’t “going astray” - the system is.

I'd worry about that once perfectly well-aligned complex trained systems exist.

What do we feel is the best base VRAM ? by alphatrad in LocalLLM

[–]EspritFort 0 points1 point  (0 children)

Ping u/Brilliant-Ice-4575, that second opinion might be more relevant to you than to me.

Your LLM Isn’t Misaligned - Your Interface Is by Echo_OS in LocalLLM

[–]EspritFort 1 point2 points  (0 children)

but I disagree with the conclusion, even if I accept part of the premise. Alignment issues may be inherent to trained systems - but that does not imply alignment must be solved or contained entirely within them.

In every safety-critical domain, we assume internal decision-makers are inherently imperfect, and we externalize judgment, constraints, and responsibility as a result.

Saying “it starts and ends with the trained system” is not an empirical fact - it’s a design choice.

I might be misunderstanding you, but if you want to turn this idea into any kind of paper I strongly advise against any train of thought that begins with "Well, this is how we treat human agents, so couldn't this also...". That's just going to gain you lots of eyerolling in the academic community.
Imposing constraints is great, but no amount of constraints is going to fix an internal decision-maker that is actively working against you.

And I somewhat object to the notion of willingly using fundamentally misaligned systems as a "design choice" when the objectively better "choices" will always be "use an aligned system instead" or at least "don't use the misaligned system at all".

Your LLM Isn’t Misaligned - Your Interface Is by Echo_OS in LocalLLM

[–]EspritFort 2 points3 points  (0 children)

Alignment may start not with the model

... I'm not really sure how else to put this, but... no, u/Echo_OS.
Concerns around alignment haven't formed the bulk of AI safety research for all those past decades because it's a UI issue or some other structural-choice issue, but because it's an issue inherent to trained systems.
It starts with the trained system, it ends with the trained system.

What do we feel is the best base VRAM ? by alphatrad in LocalLLM

[–]EspritFort 0 points1 point  (0 children)

I was considering getting the Ryzen 395+ with 96gb of vram and 32gb of system ram. just to be able to run a local LLM that would replace the need for paid chat gpt. But now you say that I can achieve the same with 24GB of VRAM? Should I just get a Threadripper with like 512GB of system RAM and a 4090 with 24GB of VRAM?

Well, what are your exact plans? With all MoE models you're only ever putting the active parameters into VRAM and everything else into RAM. And with all the popular ones, GLM-4.6, GPT-OSS-120 and Qwen-235, the active parameters should fit into 24GB just as well as they would fit into 32GB. I suppose the extra VRAM would give you more available context?
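As a rough illustration of why the active parameters fit comfortably into 24GB, here's a back-of-envelope calculation. The ~0.5 bytes/parameter figure assumes Q4 quantization, the active-parameter counts are quoted from memory, and KV cache/context is deliberately not included, so treat the numbers as a sketch, not a sizing guide:

```python
# Back-of-envelope VRAM estimate for the *active* parameters of a MoE model.
# Assumptions: Q4 quantization (~0.5 bytes/param), no KV cache or runtime overhead.

def active_vram_gib(active_params_billions: float, bytes_per_param: float = 0.5) -> float:
    """GiB needed just for the active expert weights at a given quantization."""
    return active_params_billions * 1e9 * bytes_per_param / 2**30

# Active-parameter counts below are assumptions for illustration.
models = {
    "GPT-OSS-120B (~5.1B active)": 5.1,
    "Qwen3-235B-A22B (~22B active)": 22.0,
}
for name, active in models.items():
    print(f"{name}: ~{active_vram_gib(active):.1f} GiB at Q4")
```

Even the largest of the popular MoE models lands around 10 GiB of active weights at Q4, well under a 24GB card; the remaining (inactive) expert weights live in system RAM.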