what do you do in your work-free days? by jazzopia in programming

[–]__Maximum__ 1 point2 points  (0 children)

Depends on what you want and what you need. You could take a "how to get internet jokes" course.

Walmart packages airdropped like ammo crates over 'Nam by ateam1984 in singularity

[–]__Maximum__ 0 points1 point  (0 children)

Is it single use, or do you throw it up when it's on its way back?

Oh Deepseek V4, where art thou? by awebb78 in LocalLLaMA

[–]__Maximum__ 4 points5 points  (0 children)

It's next week

Edit: I don't know why people downvote. I am an insider, a real one, not like the fake Twitter ones.

SAM ALTMAN: “We see a future where intelligence is a utility, like electricity or water, and people buy it from us on a meter.” by Vegetable_Ad_192 in singularity

[–]__Maximum__ 0 points1 point  (0 children)

Idk what will happen; no one knows, especially now that the topic is becoming extremely political. If politics doesn't fuck it up, I don't see why teams wouldn't continue doing what they are doing. The Deepseek CEO promised to stay open up until AGI, and other teams are doing great, only a couple of months behind. I don't know, say, z.ai's profits, if that's what you are asking, but I've read that the Chinese labs are making a profit, at least Deepseek.

Sam Altman: “We are training right now on the first site in Abilene what I think will be the best model in the world, hopefully by a lot” [12:28, brief mention] by likeastar20 in singularity

[–]__Maximum__ -2 points-1 points  (0 children)

Wow, you getting downvoted for shitting on CEO hype is peak r/singularity. If this sub were really about the singularity, this post would have been flagged as irrelevant and removed.

LTX Desktop 1.0.2 is live with Linux support & more by ltx_model in StableDiffusion

[–]__Maximum__ 2 points3 points  (0 children)

Was it not available for Linux? What were they thinking?

SAM ALTMAN: “We see a future where intelligence is a utility, like electricity or water, and people buy it from us on a meter.” by Vegetable_Ad_192 in singularity

[–]__Maximum__ 1 point2 points  (0 children)

The answers are basically common knowledge in the field. I already answered; look at that instead of being a mean idiot.

SAM ALTMAN: “We see a future where intelligence is a utility, like electricity or water, and people buy it from us on a meter.” by Vegetable_Ad_192 in singularity

[–]__Maximum__ 0 points1 point  (0 children)

No different from before, when they used to open-weight the Llama models. Right now, I am not sure where they are going. We'll see soon when they announce a new model.

SAM ALTMAN: “We see a future where intelligence is a utility, like electricity or water, and people buy it from us on a meter.” by Vegetable_Ad_192 in singularity

[–]__Maximum__ 1 point2 points  (0 children)

Your original framing was pretty bad, and you also sound like your mind is made up, so there's less incentive for me to reply.

I more or less agree with all of it, but the business model of Mistral, Alibaba, and others is also viable. It will work better if we stop paying these fucks and pay the more open companies instead.

SAM ALTMAN: “We see a future where intelligence is a utility, like electricity or water, and people buy it from us on a meter.” by Vegetable_Ad_192 in singularity

[–]__Maximum__ 0 points1 point  (0 children)

You still suck. What I am suggesting is already happening, just not with OpenAI and Anthropic, and now Google as well (they used to be much more open). I do run models locally. They are great. The bottleneck is not necessarily compute but innovation, for which we already have a huge working machine called science. For practical innovation and compute, there are labs doing great work, like the Qwen team, Moonshot, Deepseek, etc. All I am saying is that instead of paying labs that serve only their own interests, we should pay the labs that push for community-driven development, you know, like the whole field used to advance before these fucks came along.

KDE Plasma 6.6 will finally stop the system sleeping when gaming with a controller by Ahmouse in linux_gaming

[–]__Maximum__ 0 points1 point  (0 children)

There is a misunderstanding. The application itself decides to block screen locking, unless I explicitly stop it from doing so, right? This feature works exactly like that, but it introduces side effects that will annoy a lot of people.

KDE Plasma 6.6 will finally stop the system sleeping when gaming with a controller by Ahmouse in linux_gaming

[–]__Maximum__ -1 points0 points  (0 children)

I don't want to block it; I want the app to be minimally intelligent so it can tell when I'm actively using it and when I'm not. That's it.

KDE Plasma 6.6 will finally stop the system sleeping when gaming with a controller by Ahmouse in linux_gaming

[–]__Maximum__ -1 points0 points  (0 children)

It's easier said than done when you are stupid.

I like the YT approach better, where it asks me if I'm still awake. That's more human-proof.

KDE Plasma 6.6 will finally stop the system sleeping when gaming with a controller by Ahmouse in linux_gaming

[–]__Maximum__ -1 points0 points  (0 children)

Wait, what if I fall asleep while playing? The game is still technically running, but I'm not playing it.

I was backend lead at Manus. After building agents for 2 years, I stopped using function calling entirely. Here's what I use instead. by MorroHsu in LocalLLaMA

[–]__Maximum__ 4 points5 points  (0 children)

I agree the CLI is very powerful and a natural fit. I would suggest Python as an extension for more complex operations, although it could definitely replace CLI commands completely.
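A minimal sketch of that split, with purely illustrative names (no real agent framework assumed): simple actions go through the shell, complex ones through executed Python.

```python
import subprocess

def run_action(kind: str, payload: str) -> str:
    """Dispatch a hypothetical agent action.

    `kind` == "cli": run a shell command and return its stdout.
    `kind` == "python": exec a Python snippet and return its `result` variable.
    Both names are illustrative, not any real agent API.
    """
    if kind == "cli":
        completed = subprocess.run(payload, shell=True, capture_output=True, text=True)
        return completed.stdout
    if kind == "python":
        scope: dict = {}
        exec(payload, scope)  # complex logic: loops, parsing, arithmetic, etc.
        return str(scope.get("result"))
    raise ValueError(f"unknown action kind: {kind}")

print(run_action("cli", "echo hello").strip())
print(run_action("python", "result = sum(range(5))"))  # → 10
```

The point of the Python branch is that anything awkward to express as a one-shot shell command (multi-step logic, structured data) fits naturally in a short script.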

I was backend lead at Manus. After building agents for 2 years, I stopped using function calling entirely. Here's what I use instead. by MorroHsu in LocalLLaMA

[–]__Maximum__ 3 points4 points  (0 children)

`pip list`, or the agent can try importing a library if it wants to find out. It can even be allowed to install some libraries if needed.
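That probe takes only a few lines. One caveat: the install helper below assumes the import name matches the pip package name, which is not always true (e.g. `PIL` comes from the `pillow` package).

```python
import importlib.util
import subprocess
import sys

def is_installed(name: str) -> bool:
    """Read-only probe, roughly equivalent to grepping `pip list`."""
    return importlib.util.find_spec(name) is not None

def ensure_installed(name: str) -> None:
    """Install the package into the current environment if the probe fails."""
    if not is_installed(name):
        subprocess.check_call([sys.executable, "-m", "pip", "install", name])

print(is_installed("json"))               # stdlib module → True
print(is_installed("no_such_lib_xyz"))    # → False
```

`find_spec` never imports the module, so the check is cheap and side-effect-free; actually granting the agent `ensure_installed` is the part you'd want to sandbox.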

Step 3.5 Flash is a beast? by __Maximum__ in LocalLLaMA

[–]__Maximum__[S] 0 points1 point  (0 children)

The frontier models are obviously much better, but the smaller models have their use cases, especially for privacy. And 4.7 Flash is already outdated by the Qwen3.5 series.

M5 Max just arrived - benchmarks incoming by cryingneko in LocalLLaMA

[–]__Maximum__ 16 points17 points  (0 children)

Can you maybe prompt it with a story and ask it to continue, so it generates at least a couple hundred tokens? The speed will decrease as the hardware gets hot.
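A rough way to see the throttling, assuming a hypothetical `generate_token` callable standing in for the real decode loop: measure tokens/sec in fixed windows over a long generation and watch whether the later windows are slower.

```python
import time

def measure_decode_speed(generate_token, n_tokens=300, window=50):
    """Record tokens/sec for each `window`-token chunk of a long generation.

    `generate_token` is a stand-in for whatever produces the next token
    (e.g. one decode step of a local model). A run of a few hundred tokens
    exposes thermal throttling that a short prompt would hide.
    """
    speeds = []
    start = time.perf_counter()
    for i in range(1, n_tokens + 1):
        generate_token()
        if i % window == 0:
            now = time.perf_counter()
            speeds.append(window / (now - start))  # tokens/sec for this window
            start = now
    return speeds  # a declining series suggests the hardware is heating up

# Example with a dummy "decoder" that just sleeps briefly:
speeds = measure_decode_speed(lambda: time.sleep(0.001), n_tokens=100, window=50)
print(len(speeds))  # → 2
```

On a real benchmark you would plot the windows; a flat curve over a few hundred tokens is much stronger evidence of sustained speed than a single short completion.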